go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:05:38.363
(from ginkgo_report.xml)
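For reference, the drop-inbound command that the test runs over SSH on each node (quoted in escaped form in the log below) decodes to the following shell script. It accepts loopback traffic, drops all other inbound packets for roughly two minutes, then removes both iptables rules:

    nohup sh -c '
        set -x
        sleep 10
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &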
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:03:18.687 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:03:18.687 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:03:18.687 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 20:03:18.687 Jan 28 20:03:18.687: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 20:03:18.689 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 20:03:18.829 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 20:03:18.91 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:03:18.991 (304ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:03:18.991 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:03:18.991 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 20:03:18.991 Jan 28 20:03:19.086: INFO: Getting bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:19.086: INFO: Getting bootstrap-e2e-minion-group-mh3p Jan 28 20:03:19.086: INFO: Getting bootstrap-e2e-minion-group-0n1r Jan 28 20:03:19.161: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-mh3p condition Ready to be true Jan 28 20:03:19.161: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0n1r condition Ready to be true Jan 28 20:03:19.161: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be true Jan 28 20:03:19.206: INFO: Node bootstrap-e2e-minion-group-mh3p has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-45m2p kube-proxy-bootstrap-e2e-minion-group-mh3p] Jan 28 20:03:19.206: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-45m2p kube-proxy-bootstrap-e2e-minion-group-mh3p] Jan 28 20:03:19.206: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Node bootstrap-e2e-minion-group-0n1r has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Node bootstrap-e2e-minion-group-g3s5 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-tc6bx 
kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-45m2p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sdzdk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-tc6bx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-nsst5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.252: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p": Phase="Running", Reason="", readiness=true. Elapsed: 45.274217ms Jan 28 20:03:19.252: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.254: INFO: Pod "metadata-proxy-v0.1-nsst5": Phase="Running", Reason="", readiness=true. Elapsed: 47.071294ms Jan 28 20:03:19.254: INFO: Pod "metadata-proxy-v0.1-nsst5" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.255: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.184527ms Jan 28 20:03:19.255: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-g3s5' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC }] Jan 28 20:03:19.255: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx": Phase="Running", Reason="", readiness=true. Elapsed: 48.112923ms Jan 28 20:03:19.255: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.257: INFO: Pod "metadata-proxy-v0.1-sdzdk": Phase="Running", Reason="", readiness=true. Elapsed: 50.60158ms Jan 28 20:03:19.257: INFO: Pod "metadata-proxy-v0.1-sdzdk" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.258: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5": Phase="Running", Reason="", readiness=true. Elapsed: 50.561438ms Jan 28 20:03:19.258: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.258: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r": Phase="Running", Reason="", readiness=true. Elapsed: 51.060258ms Jan 28 20:03:19.258: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.258: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:03:19.258: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:03:19.258: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:03:19.258: INFO: Pod "metadata-proxy-v0.1-45m2p": Phase="Running", Reason="", readiness=true. Elapsed: 50.874234ms Jan 28 20:03:19.258: INFO: Pod "metadata-proxy-v0.1-45m2p" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.258: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-45m2p kube-proxy-bootstrap-e2e-minion-group-mh3p] Jan 28 20:03:19.258: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:03:19.258: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:03:19.773: INFO: ssh prow@34.127.122.120:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 20:03:19.773: INFO: ssh prow@34.127.122.120:22: stdout: "" Jan 28 20:03:19.773: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:03:19.773: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:03:19.773: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0n1r condition Ready to be false Jan 28 20:03:19.777: INFO: ssh prow@34.168.72.159:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 20:03:19.777: INFO: ssh prow@34.168.72.159:22: stdout: "" Jan 28 20:03:19.777: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:03:19.777: INFO: ssh prow@34.168.72.159:22: exit code: 0 Jan 28 20:03:19.777: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-mh3p condition Ready to be false Jan 28 20:03:19.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:19.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:03:21.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.092494064s Jan 28 20:03:21.299: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-g3s5' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC }] Jan 28 20:03:21.863: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:21.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:23.297: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.090281755s Jan 28 20:03:23.297: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-g3s5' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC }] Jan 28 20:03:23.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:23.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:25.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 6.092278827s Jan 28 20:03:25.299: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 20:03:25.299: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:03:25.299: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:25.299: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:03:25.818: INFO: ssh prow@34.145.35.125:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 20:03:25.818: INFO: ssh prow@34.145.35.125:22: stdout: "" Jan 28 20:03:25.818: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:03:25.818: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:03:25.818: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be false Jan 28 20:03:25.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:25.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:25.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:27.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:27.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:27.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:29.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:30.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:30.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:31.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:32.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:03:32.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:34.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:34.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:34.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:36.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:36.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:36.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:38.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:38.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:38.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:40.166: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:40.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:40.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:42.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:42.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:42.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:44.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:44.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:44.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:46.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:46.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:46.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:48.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:48.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:48.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:50.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:50.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:50.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:52.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:52.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:52.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:54.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:54.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:54.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:56.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:56.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:56.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:03:58.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:58.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:58.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:00.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:00.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:00.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:02.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:02.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:02.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:04.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:04.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:04.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:06.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:06.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:06.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:08.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:08.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:08.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:10.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:10.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:10.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:12.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:12.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:12.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:14.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:15.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:15.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:17.005: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:17.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:17.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:19.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:19.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:19.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:21.090: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:21.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:21.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:23.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:04:23.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:23.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:25.179: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:25.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:25.243: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:27.223: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:27.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:27.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:29.266: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:29.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:29.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:31.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:31.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:31.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:33.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:33.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:33.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:35.400: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:35.462: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:35.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:37.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:37.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:37.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:39.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:39.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:39.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:41.532: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:41.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:41.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:43.576: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:43.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:43.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:45.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:45.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:45.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:47.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:47.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:04:47.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:49.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:49.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:49.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:51.755: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:51.863: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:51.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:53.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:53.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:53.907: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:55.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:55.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:55.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:57.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:57.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:57.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:59.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:00.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:00.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:01.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:02.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:02.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:04.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:04.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:04.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:06.077: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:06.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:06.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:08.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:08.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:08.218: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:10.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:10.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:10.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:12.208: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:12.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:12.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:05:14.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:14.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:14.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:16.295: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:16.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:16.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:18.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:18.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:18.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:20.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:20.435: INFO: Node bootstrap-e2e-minion-group-mh3p didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:05:20.437: INFO: Node bootstrap-e2e-minion-group-0n1r didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:05:22.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:24.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:26.468: INFO: Node bootstrap-e2e-minion-group-g3s5 didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:05:26.468: INFO: Node bootstrap-e2e-minion-group-0n1r failed reboot test. Jan 28 20:05:26.468: INFO: Node bootstrap-e2e-minion-group-g3s5 failed reboot test. Jan 28 20:05:26.468: INFO: Node bootstrap-e2e-minion-group-mh3p failed reboot test. 
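The repeated "Condition Ready of node ... is true instead of false" lines above come from the framework polling each node's Ready condition and timing out after 2m0s because the nodes never transitioned to NotReady while inbound traffic was dropped. A minimal client-go sketch of that kind of polling (not the framework's actual helper; the function name, poll interval, and logging here are illustrative) looks like this:

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNodeReadyStatus polls a node until its Ready condition matches want
    // (e.g. ConditionFalse while inbound packets are dropped) or the timeout expires.
    func waitForNodeReadyStatus(ctx context.Context, c kubernetes.Interface, name string, want v1.ConditionStatus, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient API errors and keep polling
            }
            for _, cond := range node.Status.Conditions {
                if cond.Type != v1.NodeReady {
                    continue
                }
                if cond.Status == want {
                    return true, nil
                }
                fmt.Printf("Condition Ready of node %s is %s instead of %s. Reason: %s\n", name, cond.Status, want, cond.Reason)
                return false, nil
            }
            return false, nil
        })
    }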
Jan 28 20:05:26.468: INFO: Executing termination hook on nodes Jan 28 20:05:26.468: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:05:26.468: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:05:34.286: INFO: ssh prow@34.127.122.120:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 20:05:34.286: INFO: ssh prow@34.127.122.120:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:03:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:05:34.286: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:05:34.286: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:05:34.286: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:34.286: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:05:37.838: INFO: ssh prow@34.145.35.125:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 20:05:37.838: INFO: ssh prow@34.145.35.125:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:03:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:05:37.838: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:05:37.838: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:05:37.838: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:05:37.838: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:05:38.363: INFO: ssh prow@34.168.72.159:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 20:05:38.363: INFO: ssh prow@34.168.72.159:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:03:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:05:38.363: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:05:38.363: INFO: ssh prow@34.168.72.159:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:05:38.363 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 20:05:38.363 (2m19.372s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:05:38.363 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 20:05:38.363 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5f95b to bootstrap-e2e-minion-group-mh3p Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 988.64865ms (988.660887ms including waiting) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5f95b Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5f95b_kube-system(d963f1ba-8d39-4169-912a-3ea2b305ba4d) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: Get "http://10.64.1.11:8181/ready": dial tcp 10.64.1.11:8181: connect: connection refused Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-zkf5q to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.754015323s (4.754025827s including waiting) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-zkf5q Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": dial tcp 10.64.3.24:8181: connect: connection refused Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.30:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.34:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-zkf5q 
Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5f95b Jan 28 20:05:38.414: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 20:05:38.414: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 20:05:38.414: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:05:38.414: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_513c5 became leader Jan 28 20:05:38.414: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1b6de became leader Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6x7kx to bootstrap-e2e-minion-group-mh3p Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 638.516592ms (638.533876ms including waiting) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Liveness probe failed: Get "http://10.64.1.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-qb4t9 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.519410591s (2.519418935s including waiting) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.26:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Failed: Error: failed to get sandbox container task: no running task found: task cc5844e86e91665c11906665c81f3d4c5211312c2df4be494c37e0261f046d15 not found: not found Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.33:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-xvpcb to bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 636.231986ms (636.24567ms including waiting) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": dial tcp 10.64.2.2:8093: connect: connection refused Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be 
killed and re-created. Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": dial tcp 10.64.2.8:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-xvpcb_kube-system(989c550e-f120-4c1b-9c3a-6df4b3fdde4c) Jan 28 20:05:38.414: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-qb4t9 Jan 28 20:05:38.414: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-xvpcb Jan 28 20:05:38.414: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6x7kx Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 20:05:38.414: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_3195f2fa-43b4-44c6-99b9-48340126a997 became leader Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_79df5a90-5f1c-4226-91be-48b6f9dbf1b4 became leader Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de5cb362-ceae-4fe2-9999-2c22c1c438c2 became leader Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2052b0a5-4de3-41f7-abae-084298efc321 became leader Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_35a816ba-3468-4255-96ae-1484bc9888a9 became leader Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-tc6bx to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 5.225574521s (5.225582217s including waiting) Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container 
kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0n1r_kube-system(9b011e80d8dc05f3f14727717fa821a7) Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 20:05:38.414: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_6d3679c9-8b91-439b-8dd5-7d1b052b0f95 became leader Jan 28 20:05:38.414: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_97f512eb-1061-47dc-9e27-98f52ceebe45 became leader Jan 28 20:05:38.414: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_75e50ff1-aee4-4d42-a84f-b94251206449 became leader Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dgcll to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.30054303s (2.300570468s including waiting) Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-dgcll Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.27:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dgcll Jan 28 20:05:38.414: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 20:05:38.414: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 20:05:38.414: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 20:05:38.414: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 20:05:38.414: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-45m2p to bootstrap-e2e-minion-group-mh3p Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 847.414224ms (847.440914ms including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.820556539s (1.820574424s including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4b9h5 to bootstrap-e2e-master Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 880.932728ms (880.940631ms including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.873485565s (1.873503664s including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:05:38.414: INFO: event for 
metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-nsst5 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 663.380312ms (663.388707ms including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.719868155s (1.719885142s including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sdzdk to bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 712.939789ms (712.956274ms including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.695636692s (1.695660104s including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4b9h5 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-45m2p Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sdzdk Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-nsst5 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-lwrsb to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.471766127s (3.471785385s including waiting) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.674813094s (2.674841129s including waiting) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-zddjc to bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.258017443s (1.258032513s including waiting) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 935.578053ms (935.586846ms including waiting) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container metrics-server failed liveness probe, will be restarted Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Failed: Error: failed to get sandbox container task: no running task found: task 93118149c87c74675ce0d5095e2845a398f21d95fd8ae04827f4f38ded7adf60 not found: not found Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet 
bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.912364661s (1.912373502s including waiting) Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:05:38.415 (51ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:05:38.415 Jan 28 20:05:38.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:05:38.457 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:05:38.457 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:05:38.457 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:05:38.457 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:05:38.457 STEP: Collecting events from namespace "reboot-3856". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 20:05:38.457 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/28/23 20:05:38.498 Jan 28 20:05:38.539: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 20:05:38.539: INFO: Jan 28 20:05:38.582: INFO: Logging node info for node bootstrap-e2e-master Jan 28 20:05:38.624: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 970b6f6f-4e1a-46c9-acbf-59a10a5407de 2158 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:09 +0000 UTC,LastTransitionTime:2023-01-28 19:51:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.117.50,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a4f647927569fb58286b9195c204539,SystemUUID:3a4f6479-2756-9fb5-8286-b9195c204539,BootID:8ef6f2d0-a90b-49fd-85d7-23425f9c3021,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:05:38.625: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 20:05:38.671: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 20:05:38.727: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container kube-scheduler ready: true, restart count 2 Jan 28 20:05:38.727: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container etcd-container ready: true, restart count 1 Jan 28 20:05:38.727: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container konnectivity-server-container ready: true, restart count 5 Jan 28 20:05:38.727: INFO: metadata-proxy-v0.1-4b9h5 started at 2023-01-28 19:51:06 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:38.727: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 20:05:38.727: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 20:05:38.727: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container kube-controller-manager ready: true, restart count 5 Jan 28 20:05:38.727: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container etcd-container ready: true, restart count 2 Jan 28 20:05:38.727: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container kube-apiserver ready: true, restart count 0 Jan 28 20:05:38.727: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 19:50:36 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container kube-addon-manager ready: true, restart count 1 Jan 28 20:05:38.727: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 19:50:36 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container l7-lb-controller ready: true, restart count 3 Jan 28 20:05:38.907: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 20:05:38.907: INFO: Logging node info for node bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.949: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0n1r 46df1b17-a913-4228-816e-be74f36b3df3 2697 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0n1r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:05:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-0n1r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.122.120,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:270d4de2627654ef8c167cb0cf2b2d0a,SystemUUID:270d4de2-6276-54ef-8c16-7cb0cf2b2d0a,BootID:ae0c19ff-aa1d-4907-bca0-33ead0657727,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:05:38.950: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.995: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0n1r Jan 28 20:05:39.054: INFO: kube-proxy-bootstrap-e2e-minion-group-0n1r started at 2023-01-28 19:51:05 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.054: INFO: Container kube-proxy ready: true, restart count 4 Jan 28 20:05:39.054: INFO: metadata-proxy-v0.1-sdzdk started at 2023-01-28 19:51:06 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:39.054: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 20:05:39.054: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 20:05:39.054: INFO: konnectivity-agent-xvpcb started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.054: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 20:05:39.054: INFO: metrics-server-v0.5.2-867b8754b9-zddjc started at 2023-01-28 19:51:46 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:39.054: INFO: Container metrics-server ready: false, restart count 6 Jan 28 20:05:39.054: INFO: Container metrics-server-nanny ready: false, restart count 6 Jan 28 20:05:39.214: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-0n1r Jan 28 20:05:39.214: INFO: Logging node info for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:39.255: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g3s5 1a727c84-81d4-4cc8-ad06-17830501909f 2314 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g3s5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-28 20:00:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-g3s5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient 
memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.35.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:79d547ef2c0f438965bed79c8c4eb57b,SystemUUID:79d547ef-2c0f-4389-65be-d79c8c4eb57b,BootID:6e605608-983d-4a6d-accb-1ee26169e2b6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:05:39.256: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:39.333: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:39.424: INFO: volume-snapshot-controller-0 started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container volume-snapshot-controller ready: false, restart count 9 Jan 28 20:05:39.424: INFO: coredns-6846b5b5f-zkf5q started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container coredns ready: false, restart count 7 Jan 28 20:05:39.424: INFO: metadata-proxy-v0.1-nsst5 started at 2023-01-28 19:51:06 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:39.424: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 20:05:39.424: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 20:05:39.424: INFO: konnectivity-agent-qb4t9 started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 28 20:05:39.424: INFO: kube-proxy-bootstrap-e2e-minion-group-g3s5 started at 2023-01-28 19:51:05 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container kube-proxy ready: true, restart count 4 Jan 28 20:05:39.424: INFO: l7-default-backend-8549d69d99-dgcll started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container default-http-backend ready: true, restart count 3 Jan 28 20:05:39.424: INFO: kube-dns-autoscaler-5f6455f985-tc6bx started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container autoscaler ready: true, restart count 6 Jan 28 20:05:39.612: INFO: Latency metrics for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:39.612: INFO: Logging node info for node bootstrap-e2e-minion-group-mh3p Jan 28 20:05:39.661: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-mh3p 2d56d4de-a7bd-4a59-aa22-a6e8981cfd7e 2469 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-mh3p kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:35 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:03:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-mh3p,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.72.159,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5adcca49c54c440dcbf0f8686b780b6a,SystemUUID:5adcca49-c54c-440d-cbf0-f8686b780b6a,BootID:8cd287c9-c967-4df3-9019-7e693ad4e8a0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:05:39.662: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-mh3p Jan 28 20:05:39.728: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-mh3p Jan 28 20:05:39.832: INFO: konnectivity-agent-6x7kx started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.832: INFO: Container konnectivity-agent ready: false, restart count 7 Jan 28 20:05:39.832: INFO: coredns-6846b5b5f-5f95b started at 2023-01-28 19:51:34 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.832: INFO: Container coredns ready: true, restart count 6 Jan 28 20:05:39.832: INFO: kube-proxy-bootstrap-e2e-minion-group-mh3p started at 2023-01-28 19:51:04 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.832: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 20:05:39.832: INFO: metadata-proxy-v0.1-45m2p started at 2023-01-28 19:51:05 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:39.832: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 20:05:39.832: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 20:05:40.004: INFO: Latency metrics for node bootstrap-e2e-minion-group-mh3p END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:05:40.004 (1.547s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:05:40.004 (1.547s) > Enter [DeferCleanup (Each)] 
[sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:05:40.004 STEP: Destroying namespace "reboot-3856" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 20:05:40.004 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:05:40.072 (68ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:05:40.074 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:05:40.077 (3ms)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:05:38.363 from junit_01.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:03:18.687 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:03:18.687 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:03:18.687 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 20:03:18.687 Jan 28 20:03:18.687: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 20:03:18.689 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 20:03:18.829 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 20:03:18.91 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:03:18.991 (304ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:03:18.991 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:03:18.991 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 20:03:18.991 Jan 28 20:03:19.086: INFO: Getting bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:19.086: INFO: Getting bootstrap-e2e-minion-group-mh3p Jan 28 20:03:19.086: INFO: Getting bootstrap-e2e-minion-group-0n1r Jan 28 20:03:19.161: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-mh3p condition Ready to be true Jan 28 20:03:19.161: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0n1r condition Ready to be true Jan 28 20:03:19.161: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be true Jan 28 20:03:19.206: INFO: Node bootstrap-e2e-minion-group-mh3p has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-45m2p kube-proxy-bootstrap-e2e-minion-group-mh3p] Jan 28 20:03:19.206: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-45m2p kube-proxy-bootstrap-e2e-minion-group-mh3p] Jan 28 20:03:19.206: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Node bootstrap-e2e-minion-group-0n1r has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Node bootstrap-e2e-minion-group-g3s5 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-tc6bx 
kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-45m2p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sdzdk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-tc6bx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.207: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-nsst5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:03:19.252: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p": Phase="Running", Reason="", readiness=true. Elapsed: 45.274217ms Jan 28 20:03:19.252: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.254: INFO: Pod "metadata-proxy-v0.1-nsst5": Phase="Running", Reason="", readiness=true. Elapsed: 47.071294ms Jan 28 20:03:19.254: INFO: Pod "metadata-proxy-v0.1-nsst5" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.255: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.184527ms Jan 28 20:03:19.255: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-g3s5' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC }] Jan 28 20:03:19.255: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx": Phase="Running", Reason="", readiness=true. Elapsed: 48.112923ms Jan 28 20:03:19.255: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.257: INFO: Pod "metadata-proxy-v0.1-sdzdk": Phase="Running", Reason="", readiness=true. Elapsed: 50.60158ms Jan 28 20:03:19.257: INFO: Pod "metadata-proxy-v0.1-sdzdk" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.258: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5": Phase="Running", Reason="", readiness=true. Elapsed: 50.561438ms Jan 28 20:03:19.258: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.258: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r": Phase="Running", Reason="", readiness=true. Elapsed: 51.060258ms Jan 28 20:03:19.258: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.258: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:03:19.258: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:03:19.258: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:03:19.258: INFO: Pod "metadata-proxy-v0.1-45m2p": Phase="Running", Reason="", readiness=true. Elapsed: 50.874234ms Jan 28 20:03:19.258: INFO: Pod "metadata-proxy-v0.1-45m2p" satisfied condition "running and ready, or succeeded" Jan 28 20:03:19.258: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-45m2p kube-proxy-bootstrap-e2e-minion-group-mh3p] Jan 28 20:03:19.258: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:03:19.258: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:03:19.773: INFO: ssh prow@34.127.122.120:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 20:03:19.773: INFO: ssh prow@34.127.122.120:22: stdout: "" Jan 28 20:03:19.773: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:03:19.773: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:03:19.773: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0n1r condition Ready to be false Jan 28 20:03:19.777: INFO: ssh prow@34.168.72.159:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 20:03:19.777: INFO: ssh prow@34.168.72.159:22: stdout: "" Jan 28 20:03:19.777: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:03:19.777: INFO: ssh prow@34.168.72.159:22: exit code: 0 Jan 28 20:03:19.777: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-mh3p condition Ready to be false Jan 28 20:03:19.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:19.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:03:21.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.092494064s Jan 28 20:03:21.299: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-g3s5' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC }] Jan 28 20:03:21.863: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:21.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:23.297: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.090281755s Jan 28 20:03:23.297: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-g3s5' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:02:44 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 19:51:23 +0000 UTC }] Jan 28 20:03:23.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:23.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:25.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 6.092278827s Jan 28 20:03:25.299: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 20:03:25.299: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:03:25.299: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:25.299: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:03:25.818: INFO: ssh prow@34.145.35.125:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 20:03:25.818: INFO: ssh prow@34.145.35.125:22: stdout: "" Jan 28 20:03:25.818: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:03:25.818: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:03:25.818: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be false Jan 28 20:03:25.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:25.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:25.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:27.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:27.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:27.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:29.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:30.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:30.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:31.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:32.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:03:32.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:34.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:34.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:34.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:36.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:36.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:36.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:38.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:38.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:38.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:40.166: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:40.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:40.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:42.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:42.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:42.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:44.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:44.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:44.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:46.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:46.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:46.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:48.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:48.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:48.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:50.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:50.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:50.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:52.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:52.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:52.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:54.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:54.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:54.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:56.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:56.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:56.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:03:58.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:58.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:03:58.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:00.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:00.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:00.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:02.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:02.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:02.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:04.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:04.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:04.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:06.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:06.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:06.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:08.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:08.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:08.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:10.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:10.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:10.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:12.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:12.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:12.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:14.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:15.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:15.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:17.005: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:17.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:17.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:19.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:19.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:19.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:21.090: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:21.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:21.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:23.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:04:23.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:23.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:25.179: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:25.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:25.243: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:27.223: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:27.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:27.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:29.266: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:29.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:29.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:31.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:31.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:31.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:33.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:33.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:33.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:35.400: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:35.462: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:35.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:37.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:37.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:37.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:39.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:39.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:39.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:41.532: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:41.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:41.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:43.576: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:43.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:43.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:45.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:45.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:45.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:47.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:47.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:04:47.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:49.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:49.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:49.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:51.755: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:51.863: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:51.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:53.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:53.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:53.907: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:55.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:55.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:55.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:57.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:57.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:57.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:04:59.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:00.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:00.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:01.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:02.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:02.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:04.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:04.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:04.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:06.077: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:06.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:06.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:08.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:08.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:08.218: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:10.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:10.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:10.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:12.208: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:12.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:12.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:05:14.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:14.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:14.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:16.295: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:16.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:16.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:18.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:18.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:18.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:20.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:20.435: INFO: Node bootstrap-e2e-minion-group-mh3p didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:05:20.437: INFO: Node bootstrap-e2e-minion-group-0n1r didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:05:22.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:24.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:05:26.468: INFO: Node bootstrap-e2e-minion-group-g3s5 didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:05:26.468: INFO: Node bootstrap-e2e-minion-group-0n1r failed reboot test. Jan 28 20:05:26.468: INFO: Node bootstrap-e2e-minion-group-g3s5 failed reboot test. Jan 28 20:05:26.468: INFO: Node bootstrap-e2e-minion-group-mh3p failed reboot test. 
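For readability, the escaped SSH payload logged above (and echoed back from /tmp/drop-inbound.log by the termination hook below) corresponds roughly to the following standalone rendering; the comments are added here for annotation and are not part of the test's payload:

    # Reconstruction of the drop-inbound script the test runs over SSH on each node.
    nohup sh -c '
        set -x                 # trace every command into /tmp/drop-inbound.log
        sleep 10               # give the SSH session time to return before cutting traffic
        # keep loopback traffic working, then drop all other inbound packets
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date                   # timestamp when the blackhole started
        sleep 120              # hold the blackhole for two minutes
        # restore inbound traffic by deleting both rules
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &

After launching this payload the test waits up to 2m0s for each node's Ready condition to become false; as the polling above shows, all three nodes kept reporting Ready=true for the full window, which is what produces the "at least one node failed to reboot in the time given" failure reported below.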
Jan 28 20:05:26.468: INFO: Executing termination hook on nodes Jan 28 20:05:26.468: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:05:26.468: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:05:34.286: INFO: ssh prow@34.127.122.120:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 20:05:34.286: INFO: ssh prow@34.127.122.120:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:03:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:05:34.286: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:05:34.286: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:05:34.286: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:34.286: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:05:37.838: INFO: ssh prow@34.145.35.125:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 20:05:37.838: INFO: ssh prow@34.145.35.125:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:03:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:05:37.838: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:05:37.838: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:05:37.838: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:05:37.838: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:05:38.363: INFO: ssh prow@34.168.72.159:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 20:05:38.363: INFO: ssh prow@34.168.72.159:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:03:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:05:38.363: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:05:38.363: INFO: ssh prow@34.168.72.159:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:05:38.363 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 20:05:38.363 (2m19.372s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:05:38.363 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 20:05:38.363 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5f95b to bootstrap-e2e-minion-group-mh3p Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 988.64865ms (988.660887ms including waiting) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5f95b Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5f95b_kube-system(d963f1ba-8d39-4169-912a-3ea2b305ba4d) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: Get "http://10.64.1.11:8181/ready": dial tcp 10.64.1.11:8181: connect: connection refused Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-zkf5q to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.754015323s (4.754025827s including waiting) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-zkf5q Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": dial tcp 10.64.3.24:8181: connect: connection refused Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.30:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.34:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-zkf5q 
Jan 28 20:05:38.414: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5f95b Jan 28 20:05:38.414: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 20:05:38.414: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:05:38.414: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 20:05:38.414: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:05:38.414: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:05:38.414: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_513c5 became leader Jan 28 20:05:38.414: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1b6de became leader Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6x7kx to bootstrap-e2e-minion-group-mh3p Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 638.516592ms (638.533876ms including waiting) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Liveness probe failed: Get "http://10.64.1.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-qb4t9 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.519410591s (2.519418935s including waiting) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.26:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Failed: Error: failed to get sandbox container task: no running task found: task cc5844e86e91665c11906665c81f3d4c5211312c2df4be494c37e0261f046d15 not found: not found Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.33:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-xvpcb to bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 636.231986ms (636.24567ms including waiting) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": dial tcp 10.64.2.2:8093: connect: connection refused Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": dial tcp 10.64.2.8:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:05:38.414: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-xvpcb_kube-system(989c550e-f120-4c1b-9c3a-6df4b3fdde4c) Jan 28 20:05:38.414: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-qb4t9 Jan 28 20:05:38.414: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-xvpcb Jan 28 20:05:38.414: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6x7kx Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 20:05:38.414: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 20:05:38.414: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_3195f2fa-43b4-44c6-99b9-48340126a997 became leader Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_79df5a90-5f1c-4226-91be-48b6f9dbf1b4 became leader Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de5cb362-ceae-4fe2-9999-2c22c1c438c2 became leader Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2052b0a5-4de3-41f7-abae-084298efc321 became leader Jan 28 20:05:38.414: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_35a816ba-3468-4255-96ae-1484bc9888a9 became leader Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-tc6bx to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 5.225574521s (5.225582217s including waiting) Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:05:38.414: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container kube-proxy
Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0n1r_kube-system(9b011e80d8dc05f3f14727717fa821a7) Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:05:38.414: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 20:05:38.414: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_6d3679c9-8b91-439b-8dd5-7d1b052b0f95 became leader Jan 28 20:05:38.414: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_97f512eb-1061-47dc-9e27-98f52ceebe45 became leader Jan 28 20:05:38.414: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_75e50ff1-aee4-4d42-a84f-b94251206449 became leader Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dgcll to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.30054303s (2.300570468s including waiting) Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-dgcll Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.27:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 20:05:38.414: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dgcll Jan 28 20:05:38.414: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 20:05:38.414: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 20:05:38.414: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 20:05:38.414: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 20:05:38.414: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-45m2p to bootstrap-e2e-minion-group-mh3p Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 847.414224ms (847.440914ms including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.820556539s (1.820574424s including waiting)
Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4b9h5 to bootstrap-e2e-master Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 880.932728ms (880.940631ms including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.873485565s (1.873503664s including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-nsst5 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 663.380312ms (663.388707ms including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.719868155s (1.719885142s including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sdzdk to bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 712.939789ms (712.956274ms including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.695636692s (1.695660104s including waiting) Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4b9h5 Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-45m2p Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sdzdk Jan 28 20:05:38.414: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-nsst5 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-lwrsb to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.471766127s (3.471785385s including waiting) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.674813094s (2.674841129s including waiting) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-zddjc to bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.258017443s (1.258032513s including waiting) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 935.578053ms (935.586846ms including waiting) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container metrics-server failed liveness probe, will be restarted Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Failed: Error: failed to get sandbox container task: no running task found: task 93118149c87c74675ce0d5095e2845a398f21d95fd8ae04827f4f38ded7adf60 not found: not found Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet 
bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 20:05:38.414: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.912364661s (1.912373502s including waiting) Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:05:38.414: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:05:38.415 (51ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:05:38.415 Jan 28 20:05:38.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:05:38.457 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:05:38.457 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:05:38.457 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:05:38.457 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:05:38.457 STEP: Collecting events from namespace "reboot-3856". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 20:05:38.457 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/28/23 20:05:38.498 Jan 28 20:05:38.539: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 20:05:38.539: INFO: Jan 28 20:05:38.582: INFO: Logging node info for node bootstrap-e2e-master Jan 28 20:05:38.624: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 970b6f6f-4e1a-46c9-acbf-59a10a5407de 2158 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:09 +0000 UTC,LastTransitionTime:2023-01-28 19:51:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.117.50,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a4f647927569fb58286b9195c204539,SystemUUID:3a4f6479-2756-9fb5-8286-b9195c204539,BootID:8ef6f2d0-a90b-49fd-85d7-23425f9c3021,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:05:38.625: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 20:05:38.671: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 20:05:38.727: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container kube-scheduler ready: true, restart count 2 Jan 28 20:05:38.727: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container etcd-container ready: true, restart count 1 Jan 28 20:05:38.727: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container konnectivity-server-container ready: true, restart count 5 Jan 28 20:05:38.727: INFO: metadata-proxy-v0.1-4b9h5 started at 2023-01-28 19:51:06 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:38.727: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 20:05:38.727: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 20:05:38.727: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container kube-controller-manager ready: true, restart count 5 Jan 28 20:05:38.727: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container etcd-container ready: true, restart count 2 Jan 28 20:05:38.727: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 19:50:19 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container kube-apiserver ready: true, restart count 0 Jan 28 20:05:38.727: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 19:50:36 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container kube-addon-manager ready: true, restart count 1 Jan 28 20:05:38.727: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 19:50:36 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:38.727: INFO: Container l7-lb-controller ready: true, restart count 3 Jan 28 20:05:38.907: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 20:05:38.907: INFO: Logging node info for node bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.949: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0n1r 46df1b17-a913-4228-816e-be74f36b3df3 2697 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0n1r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:05:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-0n1r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:05:16 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.122.120,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:270d4de2627654ef8c167cb0cf2b2d0a,SystemUUID:270d4de2-6276-54ef-8c16-7cb0cf2b2d0a,BootID:ae0c19ff-aa1d-4907-bca0-33ead0657727,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:05:38.950: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0n1r Jan 28 20:05:38.995: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0n1r Jan 28 20:05:39.054: INFO: kube-proxy-bootstrap-e2e-minion-group-0n1r started at 2023-01-28 19:51:05 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.054: INFO: Container kube-proxy ready: true, restart count 4 Jan 28 20:05:39.054: INFO: metadata-proxy-v0.1-sdzdk started at 2023-01-28 19:51:06 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:39.054: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 20:05:39.054: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 20:05:39.054: INFO: konnectivity-agent-xvpcb started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.054: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 20:05:39.054: INFO: metrics-server-v0.5.2-867b8754b9-zddjc started at 2023-01-28 19:51:46 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:39.054: INFO: Container metrics-server ready: false, restart count 6 Jan 28 20:05:39.054: INFO: Container metrics-server-nanny ready: false, restart count 6 Jan 28 20:05:39.214: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-0n1r Jan 28 20:05:39.214: INFO: Logging node info for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:39.255: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g3s5 1a727c84-81d4-4cc8-ad06-17830501909f 2314 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g3s5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-28 20:00:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-g3s5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient 
memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.35.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:79d547ef2c0f438965bed79c8c4eb57b,SystemUUID:79d547ef-2c0f-4389-65be-d79c8c4eb57b,BootID:6e605608-983d-4a6d-accb-1ee26169e2b6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:05:39.256: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:39.333: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:39.424: INFO: volume-snapshot-controller-0 started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container volume-snapshot-controller ready: false, restart count 9 Jan 28 20:05:39.424: INFO: coredns-6846b5b5f-zkf5q started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container coredns ready: false, restart count 7 Jan 28 20:05:39.424: INFO: metadata-proxy-v0.1-nsst5 started at 2023-01-28 19:51:06 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:39.424: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 20:05:39.424: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 20:05:39.424: INFO: konnectivity-agent-qb4t9 started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 28 20:05:39.424: INFO: kube-proxy-bootstrap-e2e-minion-group-g3s5 started at 2023-01-28 19:51:05 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container kube-proxy ready: true, restart count 4 Jan 28 20:05:39.424: INFO: l7-default-backend-8549d69d99-dgcll started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container default-http-backend ready: true, restart count 3 Jan 28 20:05:39.424: INFO: kube-dns-autoscaler-5f6455f985-tc6bx started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.424: INFO: Container autoscaler ready: true, restart count 6 Jan 28 20:05:39.612: INFO: Latency metrics for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:05:39.612: INFO: Logging node info for node bootstrap-e2e-minion-group-mh3p Jan 28 20:05:39.661: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-mh3p 2d56d4de-a7bd-4a59-aa22-a6e8981cfd7e 2469 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-mh3p kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:35 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:03:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-mh3p,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.72.159,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5adcca49c54c440dcbf0f8686b780b6a,SystemUUID:5adcca49-c54c-440d-cbf0-f8686b780b6a,BootID:8cd287c9-c967-4df3-9019-7e693ad4e8a0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:05:39.662: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-mh3p Jan 28 20:05:39.728: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-mh3p Jan 28 20:05:39.832: INFO: konnectivity-agent-6x7kx started at 2023-01-28 19:51:23 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.832: INFO: Container konnectivity-agent ready: false, restart count 7 Jan 28 20:05:39.832: INFO: coredns-6846b5b5f-5f95b started at 2023-01-28 19:51:34 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.832: INFO: Container coredns ready: true, restart count 6 Jan 28 20:05:39.832: INFO: kube-proxy-bootstrap-e2e-minion-group-mh3p started at 2023-01-28 19:51:04 +0000 UTC (0+1 container statuses recorded) Jan 28 20:05:39.832: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 20:05:39.832: INFO: metadata-proxy-v0.1-45m2p started at 2023-01-28 19:51:05 +0000 UTC (0+2 container statuses recorded) Jan 28 20:05:39.832: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 20:05:39.832: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 20:05:40.004: INFO: Latency metrics for node bootstrap-e2e-minion-group-mh3p END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:05:40.004 (1.547s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:05:40.004 (1.547s) > Enter [DeferCleanup (Each)] 
[sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:05:40.004 STEP: Destroying namespace "reboot-3856" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 20:05:40.004 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:05:40.072 (68ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:05:40.074 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:05:40.077 (3ms)
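The block above is the framework's on-failure diagnostics for bootstrap-e2e-minion-group-mh3p: the full Node object with its conditions, followed by the pods the kubelet reports on that node and the node's latency metrics. A minimal client-go sketch of an equivalent lookup (the kubeconfig path and node name are taken from the log; the rest is illustrative, not the framework's own dump helper):

// nodedump.go: fetch a node's conditions and the pods scheduled on it,
// roughly mirroring the diagnostics the e2e framework logs on failure.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const kubeconfig = "/workspace/.kube/config"       // path seen in the log
	const nodeName = "bootstrap-e2e-minion-group-mh3p" // node being inspected

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Node conditions (Ready, MemoryPressure, DiskPressure, ...).
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-25s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}

	// Pods the scheduler has placed on this node, across all namespaces.
	pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}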
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:02:42.785 (from ginkgo_report.xml)
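This case disrupts each node by dropping its outbound traffic rather than rebooting it. As the log below shows, the test SSHes a backgrounded script to every node that inserts iptables OUTPUT rules (keeping loopback), waits 120 seconds, and then removes them. Reconstructed from the escaped command in the log, a sketch of that step might look like this; the suite itself uses its framework SSH helper, so the plain ssh invocation, user, and IPs here are only illustrative:

// dropoutbound.go: re-create the packet-drop step of the failing test.
package main

import (
	"fmt"
	"os/exec"
)

// The script the test backgrounds on each node: accept loopback traffic,
// drop everything else leaving the node for 120s, then restore the rules.
// Its trace ends up in /tmp/drop-outbound.log, which the termination hook
// later prints and removes.
const dropOutbound = `
nohup sh -c '
  set -x
  sleep 10
  while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
  while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
  date
  sleep 120
  while true; do sudo iptables -D OUTPUT -j DROP && break; done
  while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-outbound.log 2>&1 &
`

func main() {
	// External IPs taken from the log below; SSH access is assumed to be
	// configured out of band (keys, agent, etc.).
	nodes := []string{"34.127.122.120", "34.145.35.125", "34.168.72.159"}
	for _, ip := range nodes {
		out, err := exec.Command("ssh", fmt.Sprintf("prow@%s", ip), dropOutbound).CombinedOutput()
		fmt.Printf("%s: err=%v output=%q\n", ip, err, out)
	}
}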
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:00:24.067 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:00:24.067 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:00:24.067 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 20:00:24.067 Jan 28 20:00:24.067: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 20:00:24.068 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 20:00:24.193 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 20:00:24.273 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:00:24.354 (287ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:00:24.354 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:00:24.354 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/28/23 20:00:24.354 Jan 28 20:00:24.448: INFO: Getting bootstrap-e2e-minion-group-g3s5 Jan 28 20:00:24.449: INFO: Getting bootstrap-e2e-minion-group-mh3p Jan 28 20:00:24.449: INFO: Getting bootstrap-e2e-minion-group-0n1r Jan 28 20:00:24.524: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0n1r condition Ready to be true Jan 28 20:00:24.524: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be true Jan 28 20:00:24.524: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-mh3p condition Ready to be true Jan 28 20:00:24.569: INFO: Node bootstrap-e2e-minion-group-0n1r has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-0n1r metadata-proxy-v0.1-sdzdk] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-0n1r metadata-proxy-v0.1-sdzdk] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sdzdk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Node bootstrap-e2e-minion-group-g3s5 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Node bootstrap-e2e-minion-group-mh3p has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-45m2p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-tc6bx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.570: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-nsst5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.570: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.615: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx": Phase="Running", Reason="", readiness=true. Elapsed: 45.714485ms Jan 28 20:00:24.615: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.616: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 46.596568ms Jan 28 20:00:24.616: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5": Phase="Running", Reason="", readiness=true. Elapsed: 47.414966ms Jan 28 20:00:24.617: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-45m2p": Phase="Running", Reason="", readiness=true. Elapsed: 47.835449ms Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-45m2p" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r": Phase="Running", Reason="", readiness=true. Elapsed: 47.778778ms Jan 28 20:00:24.617: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-sdzdk": Phase="Running", Reason="", readiness=true. Elapsed: 48.317228ms Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-sdzdk" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-0n1r metadata-proxy-v0.1-sdzdk] Jan 28 20:00:24.617: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:00:24.617: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-nsst5": Phase="Running", Reason="", readiness=true. 
Elapsed: 47.693562ms Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-nsst5" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:00:24.617: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:00:24.617: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:00:24.618: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p": Phase="Running", Reason="", readiness=true. Elapsed: 48.326316ms Jan 28 20:00:24.618: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.618: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:00:24.618: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:00:24.618: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:00:25.159: INFO: ssh prow@34.168.72.159:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 20:00:25.159: INFO: ssh prow@34.168.72.159:22: stdout: "" Jan 28 20:00:25.159: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:00:25.159: INFO: ssh prow@34.168.72.159:22: exit code: 0 Jan 28 20:00:25.159: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-mh3p condition Ready to be false Jan 28 20:00:25.169: INFO: ssh prow@34.145.35.125:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 20:00:25.169: INFO: ssh prow@34.145.35.125:22: stdout: "" Jan 28 20:00:25.169: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:00:25.169: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:00:25.169: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be false Jan 28 20:00:25.172: INFO: ssh 
prow@34.127.122.120:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 20:00:25.172: INFO: ssh prow@34.127.122.120:22: stdout: "" Jan 28 20:00:25.172: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:00:25.172: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:00:25.172: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0n1r condition Ready to be false Jan 28 20:00:25.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:25.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:25.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:27.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:27.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:27.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:29.291: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:29.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:29.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:31.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:31.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:31.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:33.376: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:33.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:33.389: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:00:35.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:35.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:35.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:37.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:37.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:37.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:39.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:39.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:39.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:41.549: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:41.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:41.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:43.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:43.596: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:43.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:45.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:45.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:45.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:47.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:47.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:47.689: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:49.723: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:49.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:49.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:51.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:51.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:51.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:53.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:53.810: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:53.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:55.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:55.853: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:55.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:57.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:57.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:57.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:59.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:00:59.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:59.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:01.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:01.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:01.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:04.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:04.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:04.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:06.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:06.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:06.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:08.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:08.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:08.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:10.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:10.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:10.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:12.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:12.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:12.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:14.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:14.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:14.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:16.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:16.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:16.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:18.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:18.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:18.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:20.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:20.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:20.403: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:22.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:22.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:22.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:24.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:24.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:01:24.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:26.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:26.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:26.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:28.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:28.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:28.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:30.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:30.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:30.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:32.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:32.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:32.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:34.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:34.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:34.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:36.777: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:36.777: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:36.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:38.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:38.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:38.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:40.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:40.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:40.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:42.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:42.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:42.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:44.958: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:44.958: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:44.958: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:47.002: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:47.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:47.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:49.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:49.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:49.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:01:51.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:51.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:51.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:53.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:53.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:53.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:55.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:55.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:55.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:57.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:57.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:57.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:59.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:59.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:59.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:01.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:01.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:01.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:03.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:03.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:03.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:05.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:05.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:05.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:07.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:07.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:07.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:09.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:09.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:09.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:11.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:11.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:11.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:13.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:13.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:13.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:15.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:02:15.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:15.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:17.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:17.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:17.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:19.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:19.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:19.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:21.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:21.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:21.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:23.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:23.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:23.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-0n1r didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-g3s5 didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-mh3p didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-0n1r failed reboot test. Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-g3s5 failed reboot test. Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-mh3p failed reboot test. 
Jan 28 20:02:25.833: INFO: Executing termination hook on nodes Jan 28 20:02:25.833: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:02:25.833: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:02:41.742: INFO: ssh prow@34.127.122.120:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 28 20:02:41.742: INFO: ssh prow@34.127.122.120:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:00:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:02:41.742: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:02:41.742: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:02:41.742: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:41.742: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:02:42.262: INFO: ssh prow@34.145.35.125:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 28 20:02:42.262: INFO: ssh prow@34.145.35.125:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:00:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:02:42.262: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:02:42.262: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:02:42.262: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:02:42.262: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:02:42.784: INFO: ssh prow@34.168.72.159:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 28 20:02:42.784: INFO: ssh prow@34.168.72.159:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:00:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 20:02:42.784: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:02:42.784: INFO: ssh prow@34.168.72.159:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:02:42.785 < Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/28/23 20:02:42.785 (2m18.431s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:02:42.785 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 20:02:42.785 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5f95b to bootstrap-e2e-minion-group-mh3p Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 988.64865ms (988.660887ms including waiting) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5f95b Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5f95b_kube-system(d963f1ba-8d39-4169-912a-3ea2b305ba4d) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: Get "http://10.64.1.11:8181/ready": dial tcp 10.64.1.11:8181: connect: connection refused Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-zkf5q to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.754015323s (4.754025827s including waiting) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-zkf5q Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": dial tcp 10.64.3.24:8181: connect: connection refused Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.30:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-zkf5q Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5f95b Jan 28 20:02:42.842: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set 
coredns-6846b5b5f to 1 Jan 28 20:02:42.842: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 20:02:42.842: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:02:42.842: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_513c5 became leader Jan 28 20:02:42.842: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1b6de became leader Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6x7kx to bootstrap-e2e-minion-group-mh3p Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 638.516592ms (638.533876ms including waiting) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Liveness probe failed: Get "http://10.64.1.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-qb4t9 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.519410591s (2.519418935s including waiting) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.26:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-xvpcb to bootstrap-e2e-minion-group-0n1r Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 636.231986ms (636.24567ms including waiting) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": dial tcp 10.64.2.2:8093: connect: connection refused Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": dial tcp 10.64.2.8:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-qb4t9 Jan 28 20:02:42.842: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-xvpcb Jan 28 20:02:42.842: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6x7kx Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 20:02:42.842: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 20:02:42.842: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_3195f2fa-43b4-44c6-99b9-48340126a997 became leader Jan 28 20:02:42.842: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_79df5a90-5f1c-4226-91be-48b6f9dbf1b4 became leader Jan 28 20:02:42.842: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de5cb362-ceae-4fe2-9999-2c22c1c438c2 became leader Jan 28 20:02:42.842: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2052b0a5-4de3-41f7-abae-084298efc321 became leader Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-tc6bx to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 5.225574521s (5.225582217s including waiting) Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0n1r_kube-system(9b011e80d8dc05f3f14727717fa821a7) Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 20:02:42.842: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_6d3679c9-8b91-439b-8dd5-7d1b052b0f95 became leader Jan 28 20:02:42.842: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_97f512eb-1061-47dc-9e27-98f52ceebe45 became leader Jan 28 20:02:42.842: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_75e50ff1-aee4-4d42-a84f-b94251206449 became leader Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dgcll to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.30054303s (2.300570468s including waiting) Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-dgcll Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dgcll Jan 28 20:02:42.842: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 20:02:42.842: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 20:02:42.842: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 20:02:42.842: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 20:02:42.842: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-45m2p to bootstrap-e2e-minion-group-mh3p Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 847.414224ms (847.440914ms including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.820556539s (1.820574424s including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: 
{node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4b9h5 to bootstrap-e2e-master Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 880.932728ms (880.940631ms including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.873485565s (1.873503664s including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 
20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-nsst5 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 663.380312ms (663.388707ms including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.719868155s (1.719885142s including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sdzdk to bootstrap-e2e-minion-group-0n1r Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 712.939789ms (712.956274ms including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.695636692s (1.695660104s including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4b9h5 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-45m2p Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sdzdk Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-nsst5 Jan 28 20:02:42.842: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.842: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 20:02:42.842: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-lwrsb to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.471766127s (3.471785385s including waiting) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.674813094s (2.674841129s including waiting) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-zddjc to bootstrap-e2e-minion-group-0n1r Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.258017443s (1.258032513s including waiting) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 935.578053ms (935.586846ms including waiting) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container metrics-server failed liveness probe, will be restarted Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Failed: Error: failed to get sandbox container task: no running task found: task 93118149c87c74675ce0d5095e2845a398f21d95fd8ae04827f4f38ded7adf60 not found: not found Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } 
SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.912364661s (1.912373502s including waiting) Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:02:42.843 (58ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:02:42.843 Jan 28 20:02:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:02:42.885 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:02:42.885 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:02:42.886 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:02:42.886 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:02:42.886 STEP: Collecting events from namespace "reboot-1994". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 20:02:42.886 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/28/23 20:02:42.927 Jan 28 20:02:42.967: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 20:02:42.967: INFO: Jan 28 20:02:43.010: INFO: Logging node info for node bootstrap-e2e-master Jan 28 20:02:43.063: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 970b6f6f-4e1a-46c9-acbf-59a10a5407de 2158 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:09 +0000 UTC,LastTransitionTime:2023-01-28 19:51:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.117.50,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a4f647927569fb58286b9195c204539,SystemUUID:3a4f6479-2756-9fb5-8286-b9195c204539,BootID:8ef6f2d0-a90b-49fd-85d7-23425f9c3021,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:02:43.064: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 20:02:43.109: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 20:03:13.152: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Jan 28 20:03:13.152: INFO: Logging node info for node bootstrap-e2e-minion-group-0n1r Jan 28 20:03:13.194: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0n1r 46df1b17-a913-4228-816e-be74f36b3df3 2359 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0n1r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-28 19:59:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-0n1r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 
+0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.122.120,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:270d4de2627654ef8c167cb0cf2b2d0a,SystemUUID:270d4de2-6276-54ef-8c16-7cb0cf2b2d0a,BootID:ae0c19ff-aa1d-4907-bca0-33ead0657727,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:03:13.194: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0n1r Jan 28 20:03:13.239: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0n1r Jan 28 20:03:18.333: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-0n1r: error trying to reach service: No agent available Jan 28 20:03:18.333: INFO: Logging node info for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:18.375: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g3s5 1a727c84-81d4-4cc8-ad06-17830501909f 2314 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g3s5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-28 20:00:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-g3s5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} 
{<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.35.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:79d547ef2c0f438965bed79c8c4eb57b,SystemUUID:79d547ef-2c0f-4389-65be-d79c8c4eb57b,BootID:6e605608-983d-4a6d-accb-1ee26169e2b6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:03:18.376: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:18.421: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:18.464: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-g3s5: error trying to reach service: No agent available Jan 28 20:03:18.464: INFO: Logging node info for node 
bootstrap-e2e-minion-group-mh3p Jan 28 20:03:18.506: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-mh3p 2d56d4de-a7bd-4a59-aa22-a6e8981cfd7e 2469 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-mh3p kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:35 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:03:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-mh3p,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 
+0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.72.159,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5adcca49c54c440dcbf0f8686b780b6a,SystemUUID:5adcca49-c54c-440d-cbf0-f8686b780b6a,BootID:8cd287c9-c967-4df3-9019-7e693ad4e8a0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:03:18.507: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-mh3p Jan 28 20:03:18.553: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-mh3p Jan 28 20:03:18.596: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-mh3p: error trying to reach service: No agent available END 
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:03:18.596 (35.71s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:03:18.596 (35.71s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:03:18.596 STEP: Destroying namespace "reboot-1994" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 20:03:18.596 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:03:18.641 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:03:18.641 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:03:18.641 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:02:42.785 (from junit_01.xml)
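For readability, the command this spec pushes to each node over SSH appears in the log below only in escaped form. Reconstructed from those logged lines (a readable sketch of what the log records, not a copy of the test's source), the drop-outbound script run on each node is roughly:

    nohup sh -c '
        set -x
        sleep 10
        # Insert an ACCEPT rule for loopback at position 1 so the node can
        # still talk to itself (each command is retried in a loop, presumably
        # to ride out transient iptables failures)...
        while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        # ...then drop every other outbound packet at position 2.
        while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
        date
        sleep 120
        # After two minutes, delete both rules to restore connectivity.
        while true; do sudo iptables -D OUTPUT -j DROP && break; done
        while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-outbound.log 2>&1 &

While the DROP rule is in place the kubelet cannot post status, so the test waits (up to 2m0s per the log) for each node's Ready condition to go false and then expects the nodes to recover once the rules are removed; the failure above means at least one node did not complete that round trip within the allotted time.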
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:00:24.067 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:00:24.067 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:00:24.067 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 20:00:24.067 Jan 28 20:00:24.067: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 20:00:24.068 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 20:00:24.193 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 20:00:24.273 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:00:24.354 (287ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:00:24.354 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:00:24.354 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/28/23 20:00:24.354 Jan 28 20:00:24.448: INFO: Getting bootstrap-e2e-minion-group-g3s5 Jan 28 20:00:24.449: INFO: Getting bootstrap-e2e-minion-group-mh3p Jan 28 20:00:24.449: INFO: Getting bootstrap-e2e-minion-group-0n1r Jan 28 20:00:24.524: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0n1r condition Ready to be true Jan 28 20:00:24.524: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be true Jan 28 20:00:24.524: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-mh3p condition Ready to be true Jan 28 20:00:24.569: INFO: Node bootstrap-e2e-minion-group-0n1r has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-0n1r metadata-proxy-v0.1-sdzdk] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-0n1r metadata-proxy-v0.1-sdzdk] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sdzdk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Node bootstrap-e2e-minion-group-g3s5 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Node bootstrap-e2e-minion-group-mh3p has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-45m2p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-tc6bx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.569: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.570: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-nsst5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.570: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:00:24.615: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx": Phase="Running", Reason="", readiness=true. Elapsed: 45.714485ms Jan 28 20:00:24.615: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.616: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 46.596568ms Jan 28 20:00:24.616: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5": Phase="Running", Reason="", readiness=true. Elapsed: 47.414966ms Jan 28 20:00:24.617: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-45m2p": Phase="Running", Reason="", readiness=true. Elapsed: 47.835449ms Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-45m2p" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r": Phase="Running", Reason="", readiness=true. Elapsed: 47.778778ms Jan 28 20:00:24.617: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-sdzdk": Phase="Running", Reason="", readiness=true. Elapsed: 48.317228ms Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-sdzdk" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-0n1r metadata-proxy-v0.1-sdzdk] Jan 28 20:00:24.617: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:00:24.617: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-nsst5": Phase="Running", Reason="", readiness=true. 
Elapsed: 47.693562ms Jan 28 20:00:24.617: INFO: Pod "metadata-proxy-v0.1-nsst5" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.617: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5 volume-snapshot-controller-0] Jan 28 20:00:24.617: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:00:24.617: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:00:24.618: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p": Phase="Running", Reason="", readiness=true. Elapsed: 48.326316ms Jan 28 20:00:24.618: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" satisfied condition "running and ready, or succeeded" Jan 28 20:00:24.618: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:00:24.618: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:00:24.618: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:00:25.159: INFO: ssh prow@34.168.72.159:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 20:00:25.159: INFO: ssh prow@34.168.72.159:22: stdout: "" Jan 28 20:00:25.159: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:00:25.159: INFO: ssh prow@34.168.72.159:22: exit code: 0 Jan 28 20:00:25.159: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-mh3p condition Ready to be false Jan 28 20:00:25.169: INFO: ssh prow@34.145.35.125:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 20:00:25.169: INFO: ssh prow@34.145.35.125:22: stdout: "" Jan 28 20:00:25.169: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:00:25.169: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:00:25.169: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be false Jan 28 20:00:25.172: INFO: ssh 
prow@34.127.122.120:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 20:00:25.172: INFO: ssh prow@34.127.122.120:22: stdout: "" Jan 28 20:00:25.172: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:00:25.172: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:00:25.172: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0n1r condition Ready to be false Jan 28 20:00:25.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:25.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:25.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:27.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:27.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:27.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:29.291: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:29.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:29.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:31.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:31.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:31.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:33.376: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:33.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:33.389: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:00:35.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:35.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:35.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:37.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:37.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:37.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:39.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:39.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:39.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:41.549: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:41.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:41.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:43.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:43.596: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:43.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:45.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:45.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:45.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:47.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:47.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:47.689: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:49.723: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:49.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:49.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:51.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:51.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:51.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:53.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:53.810: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:53.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:55.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:55.853: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:55.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:57.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:57.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:57.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:59.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:00:59.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:00:59.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:01.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:01.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:01.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:04.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:04.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:04.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:06.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:06.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:06.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:08.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:08.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:08.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:10.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:10.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:10.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:12.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:12.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:12.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:14.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:14.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:14.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:16.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:16.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:16.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:18.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:18.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:18.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:20.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:20.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:20.403: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:22.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:22.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:22.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:24.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:24.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:01:24.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:26.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:26.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:26.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:28.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:28.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:28.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:30.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:30.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:30.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:32.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:32.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:32.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:34.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:34.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:34.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:36.777: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:36.777: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:36.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:38.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:38.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:38.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:40.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:40.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:40.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:42.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:42.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:42.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:44.958: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:44.958: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:44.958: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:47.002: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:47.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:47.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:49.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:49.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:49.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:01:51.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:51.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:51.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:53.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:53.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:53.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:55.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:55.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:55.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:57.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:57.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:57.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:59.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:59.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:01:59.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:01.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:01.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:01.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:03.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:03.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:03.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:05.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:05.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:05.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:07.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:07.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:07.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:09.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:09.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:09.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:11.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:11.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:11.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:13.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:13.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:13.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:02:15.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 28 20:02:15.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:15.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:17.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:17.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:17.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:19.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:19.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:19.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:21.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:21.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:21.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:23.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:23.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:23.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-0n1r didn't reach desired Ready condition status (false) within 2m0s
Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-g3s5 didn't reach desired Ready condition status (false) within 2m0s
Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-mh3p didn't reach desired Ready condition status (false) within 2m0s
Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-0n1r failed reboot test.
Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-g3s5 failed reboot test.
Jan 28 20:02:25.832: INFO: Node bootstrap-e2e-minion-group-mh3p failed reboot test.
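For context, the wait loop above is polling each node's Ready condition and expecting it to go false while outbound packets are dropped; the test gave up after 2m0s because all three nodes kept reporting Ready=True. A rough manual equivalent of that check (a sketch only; it assumes kubectl access to the same cluster and reuses one of the minion names from this run) is:

  # print the Ready condition's status and reason for one of the nodes under test
  kubectl get node bootstrap-e2e-minion-group-0n1r \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{" "}{.status.conditions[?(@.type=="Ready")].reason}{"\n"}'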
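The payload pushed over SSH at 20:00:24-20:00:25 above is logged as a Go-quoted string with \n and \t escapes; with the quoting unfolded (contents taken verbatim from the logged command) it is the script below. It accepts traffic sourced from loopback, drops all other outbound traffic for about 120 seconds, then removes both rules, matching the xtrace read back from /tmp/drop-outbound.log in the termination hook that follows:

  nohup sh -c '
      set -x
      sleep 10
      while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
      while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
      date
      sleep 120
      while true; do sudo iptables -D OUTPUT -j DROP && break; done
      while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
  ' >/tmp/drop-outbound.log 2>&1 &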
Jan 28 20:02:25.833: INFO: Executing termination hook on nodes
Jan 28 20:02:25.833: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r
Jan 28 20:02:25.833: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22)
Jan 28 20:02:41.742: INFO: ssh prow@34.127.122.120:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 28 20:02:41.742: INFO: ssh prow@34.127.122.120:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:00:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 28 20:02:41.742: INFO: ssh prow@34.127.122.120:22: stderr: ""
Jan 28 20:02:41.742: INFO: ssh prow@34.127.122.120:22: exit code: 0
Jan 28 20:02:41.742: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5
Jan 28 20:02:41.742: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22)
Jan 28 20:02:42.262: INFO: ssh prow@34.145.35.125:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 28 20:02:42.262: INFO: ssh prow@34.145.35.125:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:00:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 28 20:02:42.262: INFO: ssh prow@34.145.35.125:22: stderr: ""
Jan 28 20:02:42.262: INFO: ssh prow@34.145.35.125:22: exit code: 0
Jan 28 20:02:42.262: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p
Jan 28 20:02:42.262: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22)
Jan 28 20:02:42.784: INFO: ssh prow@34.168.72.159:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 28 20:02:42.784: INFO: ssh prow@34.168.72.159:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 20:00:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 28 20:02:42.784: INFO: ssh prow@34.168.72.159:22: stderr: ""
Jan 28 20:02:42.784: INFO: ssh prow@34.168.72.159:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:02:42.785
< Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/28/23 20:02:42.785 (2m18.431s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:02:42.785
STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 20:02:42.785 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5f95b to bootstrap-e2e-minion-group-mh3p Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 988.64865ms (988.660887ms including waiting) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5f95b Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5f95b_kube-system(d963f1ba-8d39-4169-912a-3ea2b305ba4d) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: Get "http://10.64.1.11:8181/ready": dial tcp 10.64.1.11:8181: connect: connection refused Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-zkf5q to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.754015323s (4.754025827s including waiting) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-zkf5q Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": dial tcp 10.64.3.24:8181: connect: connection refused Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.30:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-zkf5q Jan 28 20:02:42.842: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5f95b Jan 28 20:02:42.842: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set 
coredns-6846b5b5f to 1 Jan 28 20:02:42.842: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:02:42.842: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 20:02:42.842: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:02:42.842: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:02:42.842: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_513c5 became leader Jan 28 20:02:42.842: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1b6de became leader Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6x7kx to bootstrap-e2e-minion-group-mh3p Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 638.516592ms (638.533876ms including waiting) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Liveness probe failed: Get "http://10.64.1.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-qb4t9 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.519410591s (2.519418935s including waiting) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.26:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-xvpcb to bootstrap-e2e-minion-group-0n1r Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 636.231986ms (636.24567ms including waiting) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": dial tcp 10.64.2.2:8093: connect: connection refused Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": dial tcp 10.64.2.8:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:02:42.842: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-qb4t9 Jan 28 20:02:42.842: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-xvpcb Jan 28 20:02:42.842: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6x7kx Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 20:02:42.842: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 20:02:42.842: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 20:02:42.842: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_3195f2fa-43b4-44c6-99b9-48340126a997 became leader Jan 28 20:02:42.842: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_79df5a90-5f1c-4226-91be-48b6f9dbf1b4 became leader Jan 28 20:02:42.842: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de5cb362-ceae-4fe2-9999-2c22c1c438c2 became leader Jan 28 20:02:42.842: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2052b0a5-4de3-41f7-abae-084298efc321 became leader Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-tc6bx to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 5.225574521s (5.225582217s including waiting) Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:02:42.842: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0n1r_kube-system(9b011e80d8dc05f3f14727717fa821a7) Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:02:42.842: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 20:02:42.842: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_6d3679c9-8b91-439b-8dd5-7d1b052b0f95 became leader Jan 28 20:02:42.842: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_97f512eb-1061-47dc-9e27-98f52ceebe45 became leader Jan 28 20:02:42.842: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_75e50ff1-aee4-4d42-a84f-b94251206449 became leader Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dgcll to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.30054303s (2.300570468s including waiting) Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-dgcll Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:02:42.842: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dgcll Jan 28 20:02:42.842: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 20:02:42.842: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 20:02:42.842: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 20:02:42.842: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 20:02:42.842: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-45m2p to bootstrap-e2e-minion-group-mh3p Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 847.414224ms (847.440914ms including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.820556539s (1.820574424s including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: 
{node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4b9h5 to bootstrap-e2e-master Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 880.932728ms (880.940631ms including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.873485565s (1.873503664s including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 
20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-nsst5 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 663.380312ms (663.388707ms including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.719868155s (1.719885142s including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sdzdk to bootstrap-e2e-minion-group-0n1r Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 712.939789ms (712.956274ms including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.695636692s (1.695660104s including waiting) Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4b9h5 Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-45m2p Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sdzdk Jan 28 20:02:42.842: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-nsst5 Jan 28 20:02:42.842: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.842: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 20:02:42.842: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-lwrsb to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.471766127s (3.471785385s including waiting) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.674813094s (2.674841129s including waiting) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-zddjc to bootstrap-e2e-minion-group-0n1r Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.258017443s (1.258032513s including waiting) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 935.578053ms (935.586846ms including waiting) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container metrics-server failed liveness probe, will be restarted Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Failed: Error: failed to get sandbox container task: no running task found: task 93118149c87c74675ce0d5095e2845a398f21d95fd8ae04827f4f38ded7adf60 not found: not found Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } 
SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 20:02:42.843: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.912364661s (1.912373502s including waiting) Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:02:42.843: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:02:42.843 (58ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:02:42.843
Jan 28 20:02:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:02:42.885 (43ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:02:42.885
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:02:42.886 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:02:42.886
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:02:42.886
STEP: Collecting events from namespace "reboot-1994". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 20:02:42.886
STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/28/23 20:02:42.927 Jan 28 20:02:42.967: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 20:02:42.967: INFO: Jan 28 20:02:43.010: INFO: Logging node info for node bootstrap-e2e-master Jan 28 20:02:43.063: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 970b6f6f-4e1a-46c9-acbf-59a10a5407de 2158 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:09 +0000 UTC,LastTransitionTime:2023-01-28 19:51:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:01:38 +0000 UTC,LastTransitionTime:2023-01-28 19:51:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.117.50,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a4f647927569fb58286b9195c204539,SystemUUID:3a4f6479-2756-9fb5-8286-b9195c204539,BootID:8ef6f2d0-a90b-49fd-85d7-23425f9c3021,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:02:43.064: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 20:02:43.109: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 20:03:13.152: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Jan 28 20:03:13.152: INFO: Logging node info for node bootstrap-e2e-minion-group-0n1r Jan 28 20:03:13.194: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0n1r 46df1b17-a913-4228-816e-be74f36b3df3 2359 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0n1r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-28 19:59:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-0n1r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 
+0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 19:59:46 +0000 UTC,LastTransitionTime:2023-01-28 19:59:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:37 +0000 UTC,LastTransitionTime:2023-01-28 20:02:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.122.120,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:270d4de2627654ef8c167cb0cf2b2d0a,SystemUUID:270d4de2-6276-54ef-8c16-7cb0cf2b2d0a,BootID:ae0c19ff-aa1d-4907-bca0-33ead0657727,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:03:13.194: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0n1r Jan 28 20:03:13.239: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0n1r Jan 28 20:03:18.333: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-0n1r: error trying to reach service: No agent available Jan 28 20:03:18.333: INFO: Logging node info for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:18.375: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g3s5 1a727c84-81d4-4cc8-ad06-17830501909f 2314 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g3s5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-28 20:00:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-g3s5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} 
{<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:00:15 +0000 UTC,LastTransitionTime:2023-01-28 20:00:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.35.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:79d547ef2c0f438965bed79c8c4eb57b,SystemUUID:79d547ef-2c0f-4389-65be-d79c8c4eb57b,BootID:6e605608-983d-4a6d-accb-1ee26169e2b6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:03:18.376: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:18.421: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-g3s5 Jan 28 20:03:18.464: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-g3s5: error trying to reach service: No agent available Jan 28 20:03:18.464: INFO: Logging node info for node 
bootstrap-e2e-minion-group-mh3p Jan 28 20:03:18.506: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-mh3p 2d56d4de-a7bd-4a59-aa22-a6e8981cfd7e 2469 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-mh3p kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 20:02:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:02:35 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:03:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-mh3p,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 +0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:03:16 
+0000 UTC,LastTransitionTime:2023-01-28 19:58:15 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:02:35 +0000 UTC,LastTransitionTime:2023-01-28 20:02:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.72.159,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5adcca49c54c440dcbf0f8686b780b6a,SystemUUID:5adcca49-c54c-440d-cbf0-f8686b780b6a,BootID:8cd287c9-c967-4df3-9019-7e693ad4e8a0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:03:18.507: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-mh3p Jan 28 20:03:18.553: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-mh3p Jan 28 20:03:18.596: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-mh3p: error trying to reach service: No agent available END 
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:03:18.596 (35.71s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:03:18.596 (35.71s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:03:18.596 STEP: Destroying namespace "reboot-1994" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 20:03:18.596 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:03:18.641 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:03:18.641 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:03:18.641 (0s)
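Note on the dump above: the per-node "Logging pods the kubelet thinks is on node ..." calls go through the API server's konnectivity tunnel to each kubelet, which is why they fail with "No agent available" / the konnectivity-server socket error once that tunnel is down. When the API server itself is still reachable, roughly the same node state can be pulled directly with kubectl against the kubeConfig shown in this log; a minimal sketch (the konnectivity-agent label selector is an assumption, not taken from this log):

# Node conditions and addresses, comparable to the Node Info dumps above
kubectl --kubeconfig /workspace/.kube/config get nodes -o wide
kubectl --kubeconfig /workspace/.kube/config describe node bootstrap-e2e-minion-group-0n1r

# Check whether the konnectivity agents backing the kubelet proxy are running
# (label selector k8s-app=konnectivity-agent is assumed here)
kubectl --kubeconfig /workspace/.kube/config -n kube-system get pods -l k8s-app=konnectivity-agent -o wide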
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:09:51.087from ginkgo_report.xml
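The log that follows shows how this case induces the reboot: the suite SSHes to each node's external IP and backgrounds a shell that enables sysrq and then writes 'c' to /proc/sysrq-trigger, forcing a kernel panic, after which it waits for the node's Ready condition to go false and then true again. A minimal standalone sketch of that trigger and the follow-up check, with the node IP, SSH user, and node name taken from the log below (running this really does crash the target machine):

# Enable the sysrq interface, then request a crash ('c') after a short delay,
# detached so the SSH session returns before the panic hits.
ssh prow@34.168.72.159 \
  "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"

# The suite then polls the node until Ready flips to False and back to True:
kubectl --kubeconfig /workspace/.kube/config get node bootstrap-e2e-minion-group-mh3p \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'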
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:07:49.386 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:07:49.386 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:07:49.386 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 20:07:49.386 Jan 28 20:07:49.386: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 20:07:49.388 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 20:07:49.519 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 20:07:49.599 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:07:49.679 (293ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:07:49.679 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:07:49.679 (0s) > Enter [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/28/23 20:07:49.679 Jan 28 20:07:49.774: INFO: Getting bootstrap-e2e-minion-group-mh3p Jan 28 20:07:49.774: INFO: Getting bootstrap-e2e-minion-group-g3s5 Jan 28 20:07:49.774: INFO: Getting bootstrap-e2e-minion-group-0n1r Jan 28 20:07:49.815: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-mh3p condition Ready to be true Jan 28 20:07:49.847: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0n1r condition Ready to be true Jan 28 20:07:49.847: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be true Jan 28 20:07:49.857: INFO: Node bootstrap-e2e-minion-group-mh3p has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:07:49.857: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:07:49.857: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-45m2p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.857: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Node bootstrap-e2e-minion-group-0n1r has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:07:49.889: INFO: Node bootstrap-e2e-minion-group-g3s5 has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5] Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-tc6bx 
kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5] Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-nsst5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-tc6bx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sdzdk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.900: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p": Phase="Running", Reason="", readiness=true. Elapsed: 42.79064ms Jan 28 20:07:49.900: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.900: INFO: Pod "metadata-proxy-v0.1-45m2p": Phase="Running", Reason="", readiness=true. Elapsed: 42.842201ms Jan 28 20:07:49.900: INFO: Pod "metadata-proxy-v0.1-45m2p" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.900: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:07:49.900: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:07:49.900: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:07:49.934: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.062405ms Jan 28 20:07:49.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx": Phase="Running", Reason="", readiness=true. Elapsed: 45.003109ms Jan 28 20:07:49.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.934: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Pod "metadata-proxy-v0.1-nsst5": Phase="Running", Reason="", readiness=true. Elapsed: 46.105666ms Jan 28 20:07:49.935: INFO: Pod "metadata-proxy-v0.1-nsst5" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Pod "metadata-proxy-v0.1-sdzdk": Phase="Running", Reason="", readiness=true. Elapsed: 45.923711ms Jan 28 20:07:49.935: INFO: Pod "metadata-proxy-v0.1-sdzdk" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5": Phase="Running", Reason="", readiness=true. Elapsed: 46.01751ms Jan 28 20:07:49.935: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5] Jan 28 20:07:49.935: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:07:49.935: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:07:49.935: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r": Phase="Running", Reason="", readiness=true. Elapsed: 46.224157ms Jan 28 20:07:49.935: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:07:49.935: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:07:49.935: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:07:50.433: INFO: ssh prow@34.168.72.159:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 20:07:50.433: INFO: ssh prow@34.168.72.159:22: stdout: "" Jan 28 20:07:50.433: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:07:50.433: INFO: ssh prow@34.168.72.159:22: exit code: 0 Jan 28 20:07:50.433: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-mh3p condition Ready to be false Jan 28 20:07:50.456: INFO: ssh prow@34.127.122.120:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 20:07:50.456: INFO: ssh prow@34.127.122.120:22: stdout: "" Jan 28 20:07:50.456: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:07:50.456: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:07:50.456: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0n1r condition Ready to be false Jan 28 20:07:50.468: INFO: ssh prow@34.145.35.125:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 20:07:50.468: INFO: ssh prow@34.145.35.125:22: stdout: "" Jan 28 20:07:50.468: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:07:50.468: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:07:50.468: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be false Jan 28 20:07:50.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:50.498: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:50.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:52.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:07:52.541: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:52.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:54.564: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:54.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:54.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:56.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:56.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:56.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:58.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:58.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:58.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:00.692: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:00.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:00.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:02.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:02.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:02.770: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:04.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:04.810: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:04.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:06.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:06.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:06.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:08.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:08.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:08.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:10.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:10.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:10.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:12.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:13.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:13.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:14.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:15.054: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:15.054: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:17.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:17.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:08:17.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:19.081: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:19.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:19.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:21.121: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:21.184: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:21.184: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:23.162: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:23.224: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:23.224: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:25.202: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:25.264: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:25.264: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:27.241: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:27.304: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:27.304: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:29.282: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:29.344: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:29.344: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:31.321: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:31.384: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:31.384: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:33.361: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:33.424: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:33.424: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:35.401: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:35.464: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:35.464: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:37.442: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:37.504: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:37.504: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:39.483: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:39.544: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:39.544: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:41.523: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:41.584: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:41.584: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:43.563: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:43.624: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:43.624: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 
28 20:08:45.603: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:45.664: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:45.664: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:47.644: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:47.705: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:47.705: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:49.685: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:49.744: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:49.745: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:51.725: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:51.785: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:51.785: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:57.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:57.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:57.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:59.855: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:59.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:59.857: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:01.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:01.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:01.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:03.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:03.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:03.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:06.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:06.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:06.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:08.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:08.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:08.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:10.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:10.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:10.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:12.179: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:12.179: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:12.179: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:14.224: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:14.224: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:14.225: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:16.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:16.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:16.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:18.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:18.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:09:18.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:20.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:20.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:20.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:22.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:22.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:22.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:24.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:24.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:24.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:26.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:26.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:26.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:28.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:28.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:28.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:30.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:30.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:30.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:32.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:32.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:32.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:34.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:34.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:34.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:36.797: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:36.797: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:36.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:38.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:38.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:38.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:40.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:40.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:40.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:42.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:42.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:42.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 28 20:09:44.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:44.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:44.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:47.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:47.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:47.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:49.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:49.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:49.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-0n1r didn't reach desired Ready condition status (false) within 2m0s
Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-mh3p didn't reach desired Ready condition status (false) within 2m0s
Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-g3s5 didn't reach desired Ready condition status (false) within 2m0s
Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-0n1r failed reboot test.
Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-g3s5 failed reboot test.
Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-mh3p failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:09:51.087
< Exit [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/28/23 20:09:51.087 (2m1.408s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:09:51.087
STEP: Collecting events from namespace "kube-system".
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 20:09:51.087 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5f95b to bootstrap-e2e-minion-group-mh3p Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 988.64865ms (988.660887ms including waiting) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
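The [FAILED] summary above means every node's Ready condition stayed true for the whole 2m0s window: the test only treats a node as having rebooted once its Ready condition first flips to false, and it gave up waiting for that transition. As a rough, illustrative sketch only (not the e2e framework's actual helper; the kubeconfig path, helper name, and node name are placeholders), a client-go loop that waits for such a Ready-condition transition could look like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReadyStatus polls a node until its Ready condition reports the
// desired status (ConditionFalse right after the disruption, ConditionTrue
// once the node has recovered) or the timeout expires. Hypothetical helper
// for illustration; not taken from test/e2e/cloud/gcp/reboot.go.
func waitForReadyStatus(ctx context.Context, cs kubernetes.Interface, node string, want corev1.ConditionStatus, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
		if err != nil {
			// The Get can fail transiently during the disruption, which is
			// what the "Couldn't get node ..." lines above show.
			fmt.Printf("Couldn't get node %s: %v\n", node, err)
		} else {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == want {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s didn't reach Ready=%s within %s", node, want, timeout)
}

func main() {
	// Placeholder kubeconfig path and node name.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForReadyStatus(context.Background(), cs, "bootstrap-e2e-minion-group-0n1r", corev1.ConditionFalse, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The remaining kube-system events collected by the AfterEach continue below.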
Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5f95b Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5f95b_kube-system(d963f1ba-8d39-4169-912a-3ea2b305ba4d) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: Get "http://10.64.1.11:8181/ready": dial tcp 10.64.1.11:8181: connect: connection refused Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: Get "http://10.64.1.13:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5f95b Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-zkf5q to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.754015323s (4.754025827s including waiting) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-zkf5q Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": dial tcp 10.64.3.24:8181: connect: connection refused Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.30:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.34:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-zkf5q Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.46:8181/ready": dial tcp 10.64.3.46:8181: connect: connection refused Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-zkf5q Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5f95b Jan 28 20:09:51.148: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 20:09:51.148: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 20:09:51.148: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:09:51.148: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_513c5 became leader Jan 28 20:09:51.148: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1b6de became leader Jan 28 20:09:51.148: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_16a51 became leader Jan 28 20:09:51.148: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_aecb1 became leader Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6x7kx to bootstrap-e2e-minion-group-mh3p Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 638.516592ms (638.533876ms including waiting) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Liveness probe failed: Get "http://10.64.1.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Liveness probe failed: Get "http://10.64.1.14:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-qb4t9 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.519410591s (2.519418935s including waiting) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.26:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Failed: Error: failed to get sandbox container task: no running task found: task cc5844e86e91665c11906665c81f3d4c5211312c2df4be494c37e0261f046d15 not found: not found Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.33:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-xvpcb to bootstrap-e2e-minion-group-0n1r Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 636.231986ms (636.24567ms including waiting) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": dial tcp 10.64.2.2:8093: connect: connection refused Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": dial tcp 10.64.2.8:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-xvpcb_kube-system(989c550e-f120-4c1b-9c3a-6df4b3fdde4c) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-qb4t9 Jan 28 20:09:51.148: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-xvpcb Jan 28 20:09:51.148: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6x7kx Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 20:09:51.148: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:09:51.148: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 20:09:51.148: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 20:09:51.148: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_3195f2fa-43b4-44c6-99b9-48340126a997 became leader Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_79df5a90-5f1c-4226-91be-48b6f9dbf1b4 became leader Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de5cb362-ceae-4fe2-9999-2c22c1c438c2 became leader Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2052b0a5-4de3-41f7-abae-084298efc321 became leader Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_35a816ba-3468-4255-96ae-1484bc9888a9 became leader Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. 
preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-tc6bx to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 5.225574521s (5.225582217s including waiting) Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0n1r_kube-system(9b011e80d8dc05f3f14727717fa821a7) Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 20:09:51.149: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_6d3679c9-8b91-439b-8dd5-7d1b052b0f95 became leader Jan 28 20:09:51.149: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_97f512eb-1061-47dc-9e27-98f52ceebe45 became leader Jan 28 20:09:51.149: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_75e50ff1-aee4-4d42-a84f-b94251206449 became leader Jan 28 20:09:51.149: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a91b85d9-8fac-4b8a-83f3-ac1f5ce71f73 became leader Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
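The repeated Killing, SandboxChanged, and BackOff entries for kube-proxy on all three minions above are the crash-loop signature left behind by the inbound-packet-drop window. For reference only, a minimal client-go sketch (assuming KUBECONFIG points at the cluster under test; this is not part of the test itself) that lists the same kind of kube-system events this dump reports:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG names a kubeconfig for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "reason" is a supported field selector for events; BackOff is the reason
	// recorded above for the crash-looping kube-proxy containers.
	evs, err := cs.CoreV1().Events("kube-system").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "reason=BackOff"})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("event for %s: {%s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
}

The dump continues below with the l7-default-backend and l7-lb-controller events.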
Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dgcll to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.30054303s (2.300570468s including waiting) Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-dgcll Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.27:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-dgcll Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dgcll Jan 28 20:09:51.149: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
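The l7-default-backend liveness failure above, "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", is the wording Go's net/http client emits when a probe's timeout expires before any response headers arrive, after which the kubelet restarts the container. A rough standalone sketch of the same style of check; the address and path come from the event text, while the one-second timeout is an illustrative assumption rather than the probe's configured value:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Illustrative timeout; the real livenessProbe timeout is defined in the pod spec.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.64.3.27:8080/healthz") // endpoint taken from the event above
	if err != nil {
		// An unresponsive backend yields the same "Client.Timeout exceeded while
		// awaiting headers" text seen in the Unhealthy event.
		fmt.Println("liveness check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("liveness check status:", resp.StatusCode)
}

The metadata-proxy events follow below.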
Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-45m2p to bootstrap-e2e-minion-group-mh3p Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 847.414224ms (847.440914ms including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.820556539s (1.820574424s including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4b9h5 to bootstrap-e2e-master Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 880.932728ms (880.940631ms including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.873485565s (1.873503664s including 
waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-nsst5 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 663.380312ms (663.388707ms including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.719868155s (1.719885142s including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sdzdk to bootstrap-e2e-minion-group-0n1r Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 712.939789ms (712.956274ms including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:09:51.149: INFO: event for 
metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.695636692s (1.695660104s including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4b9h5 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-45m2p Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sdzdk Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-nsst5 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
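The FailedScheduling text just above, "1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable", reflects the scheduler filtering out nodes that carry the not-ready taint or are cordoned. A hedged client-go sketch, under the same KUBECONFIG assumption as earlier, that surfaces the node state behind that message:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Spec.Unschedulable covers a cordoned node; the taints cover not-ready nodes.
		fmt.Printf("%s unschedulable=%v\n", n.Name, n.Spec.Unschedulable)
		for _, t := range n.Spec.Taints {
			fmt.Printf("  taint %s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
	}
}

The metrics-server event stream continues below.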
Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-lwrsb to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.471766127s (3.471785385s including waiting) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.674813094s (2.674841129s including waiting) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-zddjc to bootstrap-e2e-minion-group-0n1r Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.258017443s (1.258032513s including waiting) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 935.578053ms (935.586846ms including waiting) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container metrics-server failed liveness probe, will be restarted Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Failed: Error: failed to get sandbox container task: no running task found: task 93118149c87c74675ce0d5095e2845a398f21d95fd8ae04827f4f38ded7adf60 not found: not found Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet 
bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.15:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.15:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. 
preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.912364661s (1.912373502s including waiting) Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
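The BackOff entries above for metrics-server, metrics-server-nanny, and volume-snapshot-controller mean those containers were sitting in CrashLoopBackOff after their nodes came back. A small illustrative sketch, again assuming KUBECONFIG, that lists which kube-system containers on one of the affected minions are in that state; the node name is taken from the events above:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "spec.nodeName" is a supported field selector for pods.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=bootstrap-e2e-minion-group-g3s5"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, s := range p.Status.ContainerStatuses {
			if s.State.Waiting != nil && s.State.Waiting.Reason == "CrashLoopBackOff" {
				fmt.Printf("%s/%s restarts=%d: %s\n",
					p.Name, s.Name, s.RestartCount, s.State.Waiting.Message)
			}
		}
	}
}

The remaining volume-snapshot-controller events continue below.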
Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:09:51.149 (62ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:09:51.149 Jan 28 20:09:51.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:09:51.194 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - 
test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:09:51.194 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:09:51.194 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:09:51.194 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:09:51.194 STEP: Collecting events from namespace "reboot-5196". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 20:09:51.194 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/28/23 20:09:51.235 Jan 28 20:09:51.276: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 20:09:51.276: INFO: Jan 28 20:09:51.322: INFO: Logging node info for node bootstrap-e2e-master Jan 28 20:09:51.364: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 970b6f6f-4e1a-46c9-acbf-59a10a5407de 2861 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:06:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:09 +0000 UTC,LastTransitionTime:2023-01-28 19:51:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:06:44 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:06:44 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:06:44 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:06:44 +0000 UTC,LastTransitionTime:2023-01-28 19:51:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.117.50,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a4f647927569fb58286b9195c204539,SystemUUID:3a4f6479-2756-9fb5-8286-b9195c204539,BootID:8ef6f2d0-a90b-49fd-85d7-23425f9c3021,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:09:51.364: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 20:09:51.412: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 20:10:21.454: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Jan 28 20:10:21.454: INFO: Logging node info for node bootstrap-e2e-minion-group-0n1r Jan 28 20:10:21.497: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0n1r 46df1b17-a913-4228-816e-be74f36b3df3 3200 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0n1r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:06:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 20:07:00 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:09:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-0n1r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.122.120,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:270d4de2627654ef8c167cb0cf2b2d0a,SystemUUID:270d4de2-6276-54ef-8c16-7cb0cf2b2d0a,BootID:c0d6f207-96a9-4c7d-8d72-da5b063a0e50,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:10:21.497: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0n1r Jan 28 20:10:21.544: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0n1r Jan 28 20:10:51.587: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-0n1r: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Jan 28 20:10:51.587: INFO: Logging node info for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:10:51.635: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g3s5 1a727c84-81d4-4cc8-ad06-17830501909f 3202 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g3s5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:06:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:07:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 20:07:24 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:09:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-g3s5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:24 +0000 UTC,LastTransitionTime:2023-01-28 20:07:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:24 +0000 UTC,LastTransitionTime:2023-01-28 20:07:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:24 +0000 UTC,LastTransitionTime:2023-01-28 20:07:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:07:24 +0000 UTC,LastTransitionTime:2023-01-28 20:07:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.35.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g3s5.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:79d547ef2c0f438965bed79c8c4eb57b,SystemUUID:79d547ef-2c0f-4389-65be-d79c8c4eb57b,BootID:1e71fdd5-d15d-46c3-bdd9-d8f3d24beb51,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:10:51.635: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-g3s5 Jan 28 20:10:51.682: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-g3s5 Jan 28 20:11:21.724: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-g3s5: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing 
dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Jan 28 20:11:21.724: INFO: Logging node info for node bootstrap-e2e-minion-group-mh3p Jan 28 20:11:21.766: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-mh3p 2d56d4de-a7bd-4a59-aa22-a6e8981cfd7e 3197 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-mh3p kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:06:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 20:07:00 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:09:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-mh3p,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:09:20 +0000 UTC,LastTransitionTime:2023-01-28 20:09:19 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:09:20 +0000 UTC,LastTransitionTime:2023-01-28 20:09:19 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:09:20 +0000 UTC,LastTransitionTime:2023-01-28 20:09:19 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:20 +0000 UTC,LastTransitionTime:2023-01-28 20:09:19 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:20 +0000 UTC,LastTransitionTime:2023-01-28 20:09:19 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:20 +0000 UTC,LastTransitionTime:2023-01-28 20:09:19 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 
20:09:20 +0000 UTC,LastTransitionTime:2023-01-28 20:09:19 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.72.159,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-mh3p.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5adcca49c54c440dcbf0f8686b780b6a,SystemUUID:5adcca49-c54c-440d-cbf0-f8686b780b6a,BootID:2f56d746-f68a-4222-9a3e-f46b29b61c33,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:11:21.766: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-mh3p Jan 28 20:11:21.812: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-mh3p Jan 28 20:11:21.862: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-mh3p: error trying to reach service: No agent available END 
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:11:21.862 (1m30.667s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:11:21.862 (1m30.668s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:11:21.862 STEP: Destroying namespace "reboot-5196" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 20:11:21.862 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 20:11:21.908 (46ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:11:21.909 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 20:11:21.909 (0s)
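The repeated "Waiting up to 20s for node ... condition Ready to be true/false" entries above come from polling each Node object's Ready condition through the API server. A minimal client-go sketch of that check follows; the kubeconfig path and node name are copied from this run's log, and the helper name isNodeReady is illustrative, not the e2e framework's own function.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True, mirroring the
// "condition Ready to be true/false" checks logged above.
func isNodeReady(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as used by this e2e run (>>> kubeConfig: /workspace/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from the log; any node in the cluster works the same way.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-0n1r", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s Ready=%v\n", node.Name, isNodeReady(node))
}

Note that the "Unable to retrieve kubelet pods" errors above are a separate path: they come from the API server trying to reach the kubelet through the konnectivity (API server network proxy) tunnel, which is unavailable at that point in the run.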
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:09:51.087
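The log below shows how the crash is induced: over SSH, the test writes 1 to /proc/sys/kernel/sysrq to enable all SysRq functions and then writes c to /proc/sysrq-trigger, which forces an immediate kernel panic; the sleep 10 gives the SSH session time to return before the node goes down. A minimal sketch of issuing that same command from a workstation, assuming passwordless SSH as the prow user seen in the log (this shells out to the local ssh client via os/exec; it is not the e2e framework's own SSH helper):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Exact crash command the test runs on each node (copied from the log below).
	crash := "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"

	// External IP of bootstrap-e2e-minion-group-mh3p in this run.
	out, err := exec.Command("ssh", "prow@34.168.72.159", crash).CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v", err)
	}
	// Output is empty because the remote command backgrounds itself and redirects
	// its own stdout/stderr, matching the empty stdout/stderr recorded in the log.
	fmt.Printf("stdout/stderr: %q\n", out)
}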
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:07:49.386 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 20:07:49.386 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:07:49.386 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 20:07:49.386 Jan 28 20:07:49.386: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 20:07:49.388 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 20:07:49.519 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 20:07:49.599 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 20:07:49.679 (293ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:07:49.679 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 20:07:49.679 (0s) > Enter [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/28/23 20:07:49.679 Jan 28 20:07:49.774: INFO: Getting bootstrap-e2e-minion-group-mh3p Jan 28 20:07:49.774: INFO: Getting bootstrap-e2e-minion-group-g3s5 Jan 28 20:07:49.774: INFO: Getting bootstrap-e2e-minion-group-0n1r Jan 28 20:07:49.815: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-mh3p condition Ready to be true Jan 28 20:07:49.847: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0n1r condition Ready to be true Jan 28 20:07:49.847: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be true Jan 28 20:07:49.857: INFO: Node bootstrap-e2e-minion-group-mh3p has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:07:49.857: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:07:49.857: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-45m2p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.857: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Node bootstrap-e2e-minion-group-0n1r has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:07:49.889: INFO: Node bootstrap-e2e-minion-group-g3s5 has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5] Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-tc6bx 
kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5] Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-nsst5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-tc6bx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.889: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sdzdk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 20:07:49.900: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p": Phase="Running", Reason="", readiness=true. Elapsed: 42.79064ms Jan 28 20:07:49.900: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-mh3p" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.900: INFO: Pod "metadata-proxy-v0.1-45m2p": Phase="Running", Reason="", readiness=true. Elapsed: 42.842201ms Jan 28 20:07:49.900: INFO: Pod "metadata-proxy-v0.1-45m2p" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.900: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-mh3p metadata-proxy-v0.1-45m2p] Jan 28 20:07:49.900: INFO: Getting external IP address for bootstrap-e2e-minion-group-mh3p Jan 28 20:07:49.900: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-mh3p(34.168.72.159:22) Jan 28 20:07:49.934: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.062405ms Jan 28 20:07:49.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx": Phase="Running", Reason="", readiness=true. Elapsed: 45.003109ms Jan 28 20:07:49.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-tc6bx" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.934: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Pod "metadata-proxy-v0.1-nsst5": Phase="Running", Reason="", readiness=true. Elapsed: 46.105666ms Jan 28 20:07:49.935: INFO: Pod "metadata-proxy-v0.1-nsst5" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Pod "metadata-proxy-v0.1-sdzdk": Phase="Running", Reason="", readiness=true. Elapsed: 45.923711ms Jan 28 20:07:49.935: INFO: Pod "metadata-proxy-v0.1-sdzdk" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5": Phase="Running", Reason="", readiness=true. Elapsed: 46.01751ms Jan 28 20:07:49.935: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g3s5" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-tc6bx kube-proxy-bootstrap-e2e-minion-group-g3s5 metadata-proxy-v0.1-nsst5] Jan 28 20:07:49.935: INFO: Getting external IP address for bootstrap-e2e-minion-group-g3s5 Jan 28 20:07:49.935: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-g3s5(34.145.35.125:22) Jan 28 20:07:49.935: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r": Phase="Running", Reason="", readiness=true. Elapsed: 46.224157ms Jan 28 20:07:49.935: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0n1r" satisfied condition "running and ready, or succeeded" Jan 28 20:07:49.935: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-sdzdk kube-proxy-bootstrap-e2e-minion-group-0n1r] Jan 28 20:07:49.935: INFO: Getting external IP address for bootstrap-e2e-minion-group-0n1r Jan 28 20:07:49.935: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-0n1r(34.127.122.120:22) Jan 28 20:07:50.433: INFO: ssh prow@34.168.72.159:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 20:07:50.433: INFO: ssh prow@34.168.72.159:22: stdout: "" Jan 28 20:07:50.433: INFO: ssh prow@34.168.72.159:22: stderr: "" Jan 28 20:07:50.433: INFO: ssh prow@34.168.72.159:22: exit code: 0 Jan 28 20:07:50.433: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-mh3p condition Ready to be false Jan 28 20:07:50.456: INFO: ssh prow@34.127.122.120:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 20:07:50.456: INFO: ssh prow@34.127.122.120:22: stdout: "" Jan 28 20:07:50.456: INFO: ssh prow@34.127.122.120:22: stderr: "" Jan 28 20:07:50.456: INFO: ssh prow@34.127.122.120:22: exit code: 0 Jan 28 20:07:50.456: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0n1r condition Ready to be false Jan 28 20:07:50.468: INFO: ssh prow@34.145.35.125:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 20:07:50.468: INFO: ssh prow@34.145.35.125:22: stdout: "" Jan 28 20:07:50.468: INFO: ssh prow@34.145.35.125:22: stderr: "" Jan 28 20:07:50.468: INFO: ssh prow@34.145.35.125:22: exit code: 0 Jan 28 20:07:50.468: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g3s5 condition Ready to be false Jan 28 20:07:50.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:50.498: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:50.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:52.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:07:52.541: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:52.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:54.564: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:54.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:54.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:56.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:56.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:56.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:58.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:58.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:07:58.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:00.692: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:00.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:00.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:02.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:02.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:02.770: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:04.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:04.810: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:04.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:06.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:06.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:06.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:08.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:08.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:08.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:10.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:10.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:10.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:12.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:13.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:13.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:14.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:15.054: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:15.054: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:17.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:17.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:08:17.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:19.081: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:19.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:19.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:21.121: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:21.184: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:21.184: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:23.162: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:23.224: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:23.224: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:25.202: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:25.264: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:25.264: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:27.241: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:27.304: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:27.304: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:29.282: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:29.344: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:29.344: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:31.321: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:31.384: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:31.384: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:33.361: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:33.424: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:33.424: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:35.401: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:35.464: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:35.464: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:37.442: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:37.504: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:37.504: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:39.483: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:39.544: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:39.544: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:41.523: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:41.584: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:41.584: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:43.563: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:43.624: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:43.624: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 
28 20:08:45.603: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:45.664: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:45.664: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:47.644: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:47.705: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:47.705: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:49.685: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:49.744: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:49.745: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:51.725: INFO: Couldn't get node bootstrap-e2e-minion-group-mh3p Jan 28 20:08:51.785: INFO: Couldn't get node bootstrap-e2e-minion-group-0n1r Jan 28 20:08:51.785: INFO: Couldn't get node bootstrap-e2e-minion-group-g3s5 Jan 28 20:08:57.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:57.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:57.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:59.855: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:59.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:08:59.857: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:01.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:01.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:01.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:03.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:03.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:03.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:06.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:06.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:06.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:08.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:08.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:08.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:10.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:10.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:10.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:12.179: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:12.179: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:12.179: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:14.224: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:14.224: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:14.225: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:16.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:16.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:16.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:18.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:18.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:09:18.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:20.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:20.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:20.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:22.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:22.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:22.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:24.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:24.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:24.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:26.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:26.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:26.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:28.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:28.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:28.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:30.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:30.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:30.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:32.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:32.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:32.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:34.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:34.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:34.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:36.797: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:36.797: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:36.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:38.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:38.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:38.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:40.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:40.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:40.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:42.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:42.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:42.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 20:09:44.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:44.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:44.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:47.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:47.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:47.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:49.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-mh3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:49.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-g3s5 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:49.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-0n1r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-0n1r didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-mh3p didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-g3s5 didn't reach desired Ready condition status (false) within 2m0s Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-0n1r failed reboot test. Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-g3s5 failed reboot test. Jan 28 20:09:51.087: INFO: Node bootstrap-e2e-minion-group-mh3p failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 20:09:51.087 < Exit [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/28/23 20:09:51.087 (2m1.408s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:09:51.087 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 20:09:51.087 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5f95b to bootstrap-e2e-minion-group-mh3p Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 988.64865ms (988.660887ms including waiting) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
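The repeated "Couldn't get node ..." and "Condition Ready of node ... is true instead of false" lines above, followed by "didn't reach desired Ready condition status (false) within 2m0s", come from the test polling each node's Ready condition while waiting for it to drop to NotReady after the disruption. Below is a minimal client-go sketch of that kind of wait loop; the real helper lives in the e2e framework (the failure points at test/e2e/cloud/gcp/reboot.go:190), and the 2s interval, message wording, and function name here are illustrative assumptions, not the framework's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReadyStatus polls the node's Ready condition every 2s until it
// equals wantStatus (false right after the disruption, true again later) or
// the timeout expires. Transient "Couldn't get node" errors are retried.
func waitForNodeReadyStatus(ctx context.Context, c kubernetes.Interface, name string, wantStatus corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("Couldn't get node %s\n", name)
			return false, nil
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type != corev1.NodeReady {
				continue
			}
			if cond.Status == wantStatus {
				return true, nil
			}
			fmt.Printf("Condition Ready of node %s is %v instead of %v. Reason: %s, message: %s\n",
				name, cond.Status, wantStatus, cond.Reason, cond.Message)
			return false, nil
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReadyStatus(context.Background(), client, "bootstrap-e2e-minion-group-0n1r", corev1.ConditionFalse, 2*time.Minute); err != nil {
		fmt.Println("node didn't reach desired Ready condition status (false) within 2m0s:", err)
	}
}

In this run the condition stayed true for the whole 2m0s window on all three nodes, which is why the test is marked as a reboot failure even though the nodes were healthy.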
Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5f95b Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5f95b_kube-system(d963f1ba-8d39-4169-912a-3ea2b305ba4d) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: Get "http://10.64.1.11:8181/ready": dial tcp 10.64.1.11:8181: connect: connection refused Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Readiness probe failed: Get "http://10.64.1.13:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-5f95b: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5f95b Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-zkf5q to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.754015323s (4.754025827s including waiting) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
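Everything from the "Collecting events from namespace \"kube-system\"" step onward is the AfterEach hook dumping the namespace's events for triage. A rough client-go equivalent of that dump, assuming the same kubeconfig path used by the run, would look like this (a sketch, not the framework's own collection code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	events, err := client.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Source.Component is a controller ("node-controller", "default-scheduler", ...)
		// or "kubelet"; Source.Host carries the node name for kubelet-reported events,
		// matching the "{kubelet bootstrap-e2e-minion-group-...}" prefixes in this dump.
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}

Events are only retained for a limited time (one hour by the apiserver's default --event-ttl), so longer crash loops mostly surface here as the aggregated BackOff entries rather than one event per restart.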
Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-zkf5q Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": dial tcp 10.64.3.24:8181: connect: connection refused Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.30:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.34:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-zkf5q Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container coredns Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-zkf5q_kube-system(bc56bd34-3571-4e4b-abe7-beb82134f4e9) Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f-zkf5q: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: Get "http://10.64.3.46:8181/ready": dial tcp 10.64.3.46:8181: connect: connection refused Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-zkf5q Jan 28 20:09:51.148: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5f95b Jan 28 20:09:51.148: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 20:09:51.148: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
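The "Unhealthy ... Readiness probe failed" events for the coredns pods above are the kubelet probing CoreDNS's /ready endpoint on port 8181 and getting a 503, a refused connection, or a timeout while the pod (or its node) is still recovering. A hedged sketch of an equivalent probe spec, expressed with the Go API types rather than the addon's actual YAML and with illustrative thresholds, is:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/ready", // the CoreDNS readiness endpoint seen in the events
				Port: intstr.FromInt(8181),
			},
		},
		PeriodSeconds:    10, // assumed; the real addon manifest may differ
		TimeoutSeconds:   5,  // a timeout shows up as "context deadline exceeded" above
		FailureThreshold: 3,  // assumed
	}
	// While this probe fails the pod is marked NotReady and dropped from the
	// kube-dns Service endpoints; it is not restarted for this (that would be
	// a liveness probe), which is why the coredns events show BackOff from the
	// container exiting, not from probe-triggered kills.
	out, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(out))
}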
Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 20:09:51.148: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 20:09:51.148: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 20:09:51.148: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 20:09:51.148: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_513c5 became leader Jan 28 20:09:51.148: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1b6de became leader Jan 28 20:09:51.148: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_16a51 became leader Jan 28 20:09:51.148: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_aecb1 became leader Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6x7kx to bootstrap-e2e-minion-group-mh3p Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 638.516592ms (638.533876ms including waiting) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6x7kx_kube-system(ed70439e-4bcd-45f3-ab80-c3443614cb7f) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Liveness probe failed: Get "http://10.64.1.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} Unhealthy: Liveness probe failed: Get "http://10.64.1.14:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-6x7kx: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-qb4t9 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.519410591s (2.519418935s including waiting) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.26:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Failed: Error: failed to get sandbox container task: no running task found: task cc5844e86e91665c11906665c81f3d4c5211312c2df4be494c37e0261f046d15 not found: not found Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-qb4t9_kube-system(c535b342-76b5-479d-8f04-e96ca247dfe5) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.33:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
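The konnectivity-agent entries above show the complementary case: its liveness probe on port 8093 (/healthz) times out or is refused, the kubelet logs "failed liveness probe, will be restarted", and the container goes into back-off. What the kubelet does amounts to a bounded HTTP GET against the pod IP, sketched below; the IP is copied from the events and is only reachable from inside the cluster, and the one-second timeout is an assumption rather than the probe's configured timeoutSeconds.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second} // probe timeouts are typically a few seconds
	resp, err := client.Get("http://10.64.3.26:8093/healthz") // pod IP taken from the events above
	if err != nil {
		// e.g. "context deadline exceeded" or "connection refused", as reported
		// in the Unhealthy events; enough consecutive failures trigger a restart.
		fmt.Println("probe would fail:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode) // anything >= 400 counts as a failure
}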
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-qb4t9: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-xvpcb to bootstrap-e2e-minion-group-0n1r Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 636.231986ms (636.24567ms including waiting) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": dial tcp 10.64.2.2:8093: connect: connection refused Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": dial tcp 10.64.2.8:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-xvpcb_kube-system(989c550e-f120-4c1b-9c3a-6df4b3fdde4c) Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent-xvpcb: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container konnectivity-agent Jan 28 20:09:51.148: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-qb4t9 Jan 28 20:09:51.148: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-xvpcb Jan 28 20:09:51.148: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6x7kx Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 20:09:51.148: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
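Killing/BackOff/SandboxChanged cycles like the ones above for the konnectivity agents and the master's static pods also accumulate in pod status as restart counts. A small triage sketch (an assumed helper, not part of the test) that surfaces the same information directly from pod status:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			reason := ""
			if cs.LastTerminationState.Terminated != nil {
				reason = cs.LastTerminationState.Terminated.Reason // e.g. Error, OOMKilled
			}
			fmt.Printf("%s/%s on %s: restarts=%d lastTermination=%s\n",
				p.Name, cs.Name, p.Spec.NodeName, cs.RestartCount, reason)
		}
	}
}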
Jan 28 20:09:51.148: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 20:09:51.148: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:09:51.148: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 20:09:51.148: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 20:09:51.148: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_3195f2fa-43b4-44c6-99b9-48340126a997 became leader Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_79df5a90-5f1c-4226-91be-48b6f9dbf1b4 became leader Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de5cb362-ceae-4fe2-9999-2c22c1c438c2 became leader Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2052b0a5-4de3-41f7-abae-084298efc321 became leader Jan 28 20:09:51.148: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_35a816ba-3468-4255-96ae-1484bc9888a9 became leader Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. 
preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-tc6bx to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 5.225574521s (5.225582217s including waiting) Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-tc6bx_kube-system(68e7acff-d47c-41a3-999e-81f6e6886b77) Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985-tc6bx: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container autoscaler Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-tc6bx Jan 28 20:09:51.148: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0n1r_kube-system(9b011e80d8dc05f3f14727717fa821a7) Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0n1r: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:09:51.148: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g3s5_kube-system(926ffa386cd1d6d2268581c1ed0b2f8c) Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g3s5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Killing: Stopping container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-mh3p_kube-system(b150875e2fb427d0806b8243d6a9b58f) Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-proxy-bootstrap-e2e-minion-group-mh3p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container kube-proxy Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 20:09:51.149: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_6d3679c9-8b91-439b-8dd5-7d1b052b0f95 became leader Jan 28 20:09:51.149: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_97f512eb-1061-47dc-9e27-98f52ceebe45 became leader Jan 28 20:09:51.149: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_75e50ff1-aee4-4d42-a84f-b94251206449 became leader Jan 28 20:09:51.149: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a91b85d9-8fac-4b8a-83f3-ac1f5ce71f73 became leader Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dgcll to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.30054303s (2.300570468s including waiting) Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-dgcll Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Liveness probe failed: Get "http://10.64.3.27:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-dgcll Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99-dgcll: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container default-http-backend Jan 28 20:09:51.149: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dgcll Jan 28 20:09:51.149: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 28 20:09:51.149: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-45m2p to bootstrap-e2e-minion-group-mh3p Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 847.414224ms (847.440914ms including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.820556539s (1.820574424s including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-45m2p: {kubelet bootstrap-e2e-minion-group-mh3p} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4b9h5 to bootstrap-e2e-master Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 880.932728ms (880.940631ms including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.873485565s (1.873503664s including 
waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-4b9h5: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-nsst5 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 663.380312ms (663.388707ms including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.719868155s (1.719885142s including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-nsst5: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sdzdk to bootstrap-e2e-minion-group-0n1r Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 712.939789ms (712.956274ms including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 20:09:51.149: INFO: event for 
metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.695636692s (1.695660104s including waiting) Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metadata-proxy Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1-sdzdk: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container prometheus-to-sd-exporter Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4b9h5 Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-45m2p Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sdzdk Jan 28 20:09:51.149: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-nsst5 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-lwrsb to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.471766127s (3.471785385s including waiting) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.674813094s (2.674841129s including waiting) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c-lwrsb: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-lwrsb Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-zddjc to bootstrap-e2e-minion-group-0n1r Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.258017443s (1.258032513s including waiting) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 935.578053ms (935.586846ms including waiting) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Stopping container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Killing: Container metrics-server failed liveness probe, will be restarted Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Failed: Error: failed to get sandbox container task: no running task found: task 93118149c87c74675ce0d5095e2845a398f21d95fd8ae04827f4f38ded7adf60 not found: not found Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-zddjc_kube-system(75bf20cf-455a-48e7-8784-bd1f4f74d211) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet 
bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Created: Created container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Started: Started container metrics-server-nanny Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: Get "https://10.64.2.15:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Liveness probe failed: Get "https://10.64.2.15:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9-zddjc: {kubelet bootstrap-e2e-minion-group-0n1r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-zddjc Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 20:09:51.149: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. 
preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-g3s5 Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.912364661s (1.912373502s including waiting) Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Created: Created container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Started: Started container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} Killing: Stopping container volume-snapshot-controller Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-g3s5} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(b6b28b8a-55e3-411f-8ff1-7da0eec83766) Jan 28 20:09:51.149: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 20:09:51.149 (62ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:09:51.149 Jan 28 20:09:51.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 20:09:51.194 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - 
test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:09:51.194 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 20:09:51.194 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 20:09:51.194 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 20:09:51.194 STEP: Collecting events from namespace "reboot-5196". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 20:09:51.194 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/28/23 20:09:51.235 Jan 28 20:09:51.276: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 20:09:51.276: INFO: Jan 28 20:09:51.322: INFO: Logging node info for node bootstrap-e2e-master Jan 28 20:09:51.364: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 970b6f6f-4e1a-46c9-acbf-59a10a5407de 2861 0 2023-01-28 19:51:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 19:51:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 20:06:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:09 +0000 UTC,LastTransitionTime:2023-01-28 19:51:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:06:44 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:06:44 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:06:44 +0000 UTC,LastTransitionTime:2023-01-28 19:51:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:06:44 +0000 UTC,LastTransitionTime:2023-01-28 19:51:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.117.50,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a4f647927569fb58286b9195c204539,SystemUUID:3a4f6479-2756-9fb5-8286-b9195c204539,BootID:8ef6f2d0-a90b-49fd-85d7-23425f9c3021,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 20:09:51.364: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 20:09:51.412: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 20:10:21.454: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Jan 28 20:10:21.454: INFO: Logging node info for node bootstrap-e2e-minion-group-0n1r Jan 28 20:10:21.497: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0n1r 46df1b17-a913-4228-816e-be74f36b3df3 3200 0 2023-01-28 19:51:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0n1r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 19:51:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 20:06:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 20:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 20:07:00 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 20:09:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-west1-b/bootstrap-e2e-minion-group-0n1r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 20:09:21 +0000 UTC,LastTransitionTime:2023-01-28 20:09:20 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 19:51:23 +0000 UTC,LastTransitionTime:2023-01-28 19:51:23 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 20:07:00 +0000 UTC,LastTransitionTime:2023-01-28 20:07:00 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.122.120,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0n1r.c.k8s-boskos-gce-project-01.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:270d4de2627654ef8c167cb0cf2b2d0a,SystemUUID:270d4de2-6276-54ef-8c16-7cb0cf2b2d0a,BootID:c0d6f207-96a9-4c7d-8d72-da5b063a0e50,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2