go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
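The `--ginkgo.focus` value is a regular expression (spaces and hyphens are shell-escaped above) that Ginkgo matches against the fully qualified spec name. A minimal Go sketch of the same match with the escapes removed, just to show how the pattern lines up with the spec text:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Unescaped form of the --ginkgo.focus pattern above; Ginkgo matches it
	// against the spec's full name, anchored here at the end with $.
	focus := regexp.MustCompile(`Kubernetes e2e suite \[It\] \[sig-cloud-provider-gcp\] Reboot \[Disruptive\] \[Feature:Reboot\] each node by ordering clean reboot and ensure they function upon restart$`)

	spec := "Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart"
	fmt.Println(focus.MatchString(spec)) // true
}
```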
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:19:37.894
(from ginkgo_report.xml)
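For context: the test reboots each node over SSH and then polls the node's Ready condition against two deadlines (2m0s to drop out of Ready, 5m0s to come back), and the failure at reboot.go:190 fires when any node misses a deadline. A simplified sketch of that polling step using client-go; the function name and shape are illustrative, not the actual helper in test/e2e/cloud/gcp/reboot.go:

```go
package rebootsketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReadyState polls a node's Ready condition until it matches want,
// mirroring the "Waiting up to ... for node ... condition Ready to be ..."
// lines in the trace below. (Illustrative sketch, not the framework's code.)
func waitForNodeReadyState(ctx context.Context, c kubernetes.Interface, name string, want bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == v1.NodeReady && (cond.Status == v1.ConditionTrue) == want {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the trace shows roughly a 2s poll interval
	}
	return fmt.Errorf("node %s: condition Ready != %v within %v", name, want, timeout)
}
```

Both phases are visible in the trace below: "condition Ready to be false" with a 2m0s budget right after the SSH reboot command, then "condition Ready to be true" with 5m0s.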
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 19:12:29.48
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 19:12:29.48 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 19:12:29.48
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 19:12:29.48
Jan 29 19:12:29.480: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 19:12:29.481
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 19:12:29.606
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 19:12:29.686
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 19:12:29.766 (287ms)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 19:12:29.766
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 19:12:29.766 (0s)
> Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 19:12:29.766
Jan 29 19:12:29.912: INFO: Getting bootstrap-e2e-minion-group-zmlw
Jan 29 19:12:29.912: INFO: Getting bootstrap-e2e-minion-group-kbdq
Jan 29 19:12:29.912: INFO: Getting bootstrap-e2e-minion-group-6j12
Jan 29 19:12:29.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6j12 condition Ready to be true
Jan 29 19:12:29.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kbdq condition Ready to be true
Jan 29 19:12:29.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-zmlw condition Ready to be true
Jan 29 19:12:29.998: INFO: Node bootstrap-e2e-minion-group-kbdq has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d]
Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d]
Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sxj7d" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:12:29.998: INFO: Node bootstrap-e2e-minion-group-6j12 has 3 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9]
Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9]
Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-69vb9" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sqslx" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6j12" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:12:29.999: INFO: Node bootstrap-e2e-minion-group-zmlw has 3 assigned pods with no liveness probes: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw]
Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw]
Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-k4wx2" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:12:30.042: INFO: Pod "metadata-proxy-v0.1-sxj7d": Phase="Running", Reason="", readiness=true. Elapsed: 43.330371ms
Jan 29 19:12:30.042: INFO: Pod "metadata-proxy-v0.1-sxj7d" satisfied condition "running and ready, or succeeded"
Jan 29 19:12:30.045: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx": Phase="Running", Reason="", readiness=true. Elapsed: 46.591724ms
Jan 29 19:12:30.045: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx" satisfied condition "running and ready, or succeeded"
Jan 29 19:12:30.045: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.291521ms
Jan 29 19:12:30.045: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }]
Jan 29 19:12:30.045: INFO: Pod "metadata-proxy-v0.1-69vb9": Phase="Running", Reason="", readiness=true. Elapsed: 46.837808ms
Jan 29 19:12:30.045: INFO: Pod "metadata-proxy-v0.1-69vb9" satisfied condition "running and ready, or succeeded"
Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq": Phase="Running", Reason="", readiness=true. Elapsed: 49.04646ms
Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" satisfied condition "running and ready, or succeeded"
Jan 29 19:12:30.048: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d]
Jan 29 19:12:30.048: INFO: Getting external IP address for bootstrap-e2e-minion-group-kbdq
Jan 29 19:12:30.048: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-kbdq(34.168.183.142:22)
Jan 29 19:12:30.048: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=true. Elapsed: 48.795567ms
Jan 29 19:12:30.048: INFO: Pod "metadata-proxy-v0.1-k4wx2" satisfied condition "running and ready, or succeeded"
Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. Elapsed: 48.956074ms
Jan 29 19:12:30.048: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }]
Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12": Phase="Running", Reason="", readiness=true. Elapsed: 49.407798ms
Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12" satisfied condition "running and ready, or succeeded"
Jan 29 19:12:30.048: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9]
Jan 29 19:12:30.048: INFO: Getting external IP address for bootstrap-e2e-minion-group-6j12
Jan 29 19:12:30.048: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6j12(34.82.40.177:22)
Jan 29 19:12:30.564: INFO: ssh prow@34.168.183.142:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 19:12:30.564: INFO: ssh prow@34.168.183.142:22: stdout: ""
Jan 29 19:12:30.564: INFO: ssh prow@34.168.183.142:22: stderr: ""
Jan 29 19:12:30.564: INFO: ssh prow@34.168.183.142:22: exit code: 0
Jan 29 19:12:30.564: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kbdq condition Ready to be false
Jan 29 19:12:30.567: INFO: ssh prow@34.82.40.177:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 19:12:30.567: INFO: ssh prow@34.82.40.177:22: stdout: ""
Jan 29 19:12:30.567: INFO: ssh prow@34.82.40.177:22: stderr: ""
Jan 29 19:12:30.567: INFO: ssh prow@34.82.40.177:22: exit code: 0
Jan 29 19:12:30.567: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6j12 condition Ready to be false
Jan 29 19:12:30.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:12:30.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:12:32.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 2.088243397s Jan 29 19:12:32.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:32.091: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. Elapsed: 2.091848596s Jan 29 19:12:32.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }] Jan 29 19:12:32.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:32.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:34.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.088146997s Jan 29 19:12:34.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:34.090: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.090891999s Jan 29 19:12:34.090: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }] Jan 29 19:12:34.692: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:34.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:36.108: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.109078768s Jan 29 19:12:36.108: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:36.109: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=true. Elapsed: 6.110327013s Jan 29 19:12:36.109: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" satisfied condition "running and ready, or succeeded" Jan 29 19:12:36.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:36.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:38.090: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.091511588s Jan 29 19:12:38.090: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:38.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:12:38.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:40.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.088199939s Jan 29 19:12:40.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:40.823: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:40.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:42.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.08851299s Jan 29 19:12:42.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:42.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:42.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:44.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.087901123s Jan 29 19:12:44.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:44.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:44.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:46.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.087971624s Jan 29 19:12:46.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:46.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:46.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:48.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.087964908s Jan 29 19:12:48.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:48.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:48.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:50.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.088087104s Jan 29 19:12:50.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:51.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:51.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:52.088: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.089426003s Jan 29 19:12:52.088: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:53.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:53.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:54.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.088090621s Jan 29 19:12:54.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:55.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:55.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:12:56.088: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.088691305s Jan 29 19:12:56.088: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:57.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:57.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:58.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.088463773s Jan 29 19:12:58.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:59.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:59.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:00.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 30.088221202s Jan 29 19:13:00.087: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 19:13:00.087: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. 
Pods: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw] Jan 29 19:13:00.087: INFO: Getting external IP address for bootstrap-e2e-minion-group-zmlw Jan 29 19:13:00.087: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-zmlw(35.185.251.137:22) Jan 29 19:13:00.599: INFO: ssh prow@35.185.251.137:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 19:13:00.599: INFO: ssh prow@35.185.251.137:22: stdout: "" Jan 29 19:13:00.599: INFO: ssh prow@35.185.251.137:22: stderr: "" Jan 29 19:13:00.599: INFO: ssh prow@35.185.251.137:22: exit code: 0 Jan 29 19:13:00.599: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-zmlw condition Ready to be false Jan 29 19:13:00.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:01.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:01.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:02.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:03.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:03.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:04.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:05.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:05.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:06.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:07.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:07.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:08.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:09.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:09.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:10.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:11.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:11.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:12.927: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:13.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:13.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:14.970: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:15.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:15.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:17.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:17.627: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-6j12 condition Ready to be true Jan 29 19:13:17.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:17.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:13:19.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:19.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:19.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:13:21.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:21.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:21.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:13:23.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:23.757: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-kbdq condition Ready to be true Jan 29 19:13:23.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:23.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:13:25.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:25.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:13:25.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:27.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:27.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:13:27.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:29.269: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:29.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:13:29.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:31.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:31.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:13:31.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. 
Failure Jan 29 19:13:33.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:34.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:34.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:13:35.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:36.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:13:36.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:37.474: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:38.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:13:38.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:39.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:13:40.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:40.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:13:41.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:13:42.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:42.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:13:43.602: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:44.235: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:44.235: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:45.642: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:46.275: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:46.275: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:47.682: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:48.315: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:48.315: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:49.722: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:50.355: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:50.355: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:51.762: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:52.395: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:52.395: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:53.803: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:54.434: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:54.435: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:55.843: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:56.475: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:56.475: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:57.884: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:58.514: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:58.514: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:59.924: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:00.554: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:00.555: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:01.965: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:02.595: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:02.595: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:04.006: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:04.635: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:04.635: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:06.047: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:06.676: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:06.676: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:08.087: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:08.716: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:08.716: 
INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:10.127: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:10.756: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:10.756: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:12.167: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:12.796: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:12.796: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:14.207: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:14.836: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:14.836: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:16.247: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:16.876: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:16.876: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:18.287: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:18.915: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:18.915: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:20.328: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:20.955: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:20.955: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:22.369: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:22.995: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:22.995: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:24.409: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:25.035: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:25.035: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:26.449: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:27.075: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:27.075: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:28.489: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:29.115: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:29.115: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:30.529: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:31.156: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:31.156: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:32.569: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:33.196: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:33.196: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:34.609: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:35.236: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:35.236: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:36.649: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:37.276: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:37.276: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:38.689: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:39.315: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:39.315: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:40.731: INFO: 
Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:41.356: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:41.356: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:42.771: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:48.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:48.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:48.232: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-zmlw condition Ready to be true Jan 29 19:14:48.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:14:50.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:50.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:50.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:14:52.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:52.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:52.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:14:54.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:54.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:54.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 19:14:56.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:56.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:56.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:14:58.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:58.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:58.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:00.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:00.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:00.810: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:02.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:02.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:02.853: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:04.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. 
Failure Jan 29 19:15:04.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:04.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:06.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:06.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:06.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:08.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:08.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:08.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:10.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:10.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:11.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:12.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:12.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:13.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:14.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:14.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:15.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:16.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:16.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:17.157: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:18.928: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:18.928: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:19.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:20.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:20.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:21.243: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:23.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. 
Failure Jan 29 19:15:23.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:23.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:25.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:25.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:25.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:27.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:27.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:27.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:29.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:29.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:29.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:31.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:31.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:31.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:33.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:33.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:33.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:35.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:35.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:35.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:37.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:37.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:37.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:39.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:39.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:39.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:41.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. 
Failure Jan 29 19:15:41.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:41.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:43.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:43.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:43.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:45.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:45.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:45.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:47.554: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:47.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:47.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:49.596: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:49.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:49.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:51.640: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:51.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:51.893: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:53.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:53.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:53.936: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:55.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:55.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:55.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:57.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:57.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:58.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:59.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. 
Failure Jan 29 19:15:59.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:00.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:01.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:01.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:02.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:03.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:03.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:04.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:05.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:05.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:06.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:08.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:08.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:08.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:10.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:10.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:10.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:12.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:12.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:12.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:14.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:14.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:14.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:16.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:16.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:16.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:18.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. 
Failure Jan 29 19:16:18.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:18.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:20.275: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:20.275: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:20.498: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:22.315: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:22.315: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:22.538: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:24.356: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:24.356: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:24.579: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:26.395: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:26.395: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:26.619: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:28.435: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:28.435: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:28.660: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:30.477: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:30.477: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:30.700: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:32.517: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:32.517: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:32.740: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:34.558: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:34.558: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:34.781: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:36.598: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:36.598: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:36.821: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:38.638: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:38.638: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:38.861: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:40.678: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:40.678: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:40.901: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:42.718: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:42.718: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:42.941: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:44.758: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:44.758: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:44.981: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:46.798: INFO: Couldn't get node 
bootstrap-e2e-minion-group-kbdq Jan 29 19:16:46.798: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:47.021: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:48.838: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:48.838: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:49.061: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:50.878: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:50.878: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:51.101: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:52.919: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:52.919: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:53.141: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:54.960: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:54.960: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:55.181: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:57.001: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:57.001: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:57.223: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:59.041: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:59.041: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:59.263: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:01.081: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:01.081: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:01.304: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:03.121: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:03.121: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:03.344: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:05.161: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:05.161: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:05.385: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:07.201: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:07.201: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:07.425: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:09.241: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:09.241: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:09.465: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:11.281: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:11.281: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:11.506: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:13.321: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:13.321: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:13.546: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:15.362: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:15.362: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:15.587: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:17.402: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:17.402: INFO: Couldn't get node 
bootstrap-e2e-minion-group-6j12 Jan 29 19:17:17.627: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:19.443: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:19.443: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:19.667: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:21.484: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:21.484: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:21.708: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:23.524: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:23.524: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:23.748: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:25.564: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:25.564: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:25.788: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:27.603: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:27.603: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:27.828: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:29.644: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:29.644: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq
Automatically polling progress:
  [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 5m0.287s)
    test/e2e/cloud/gcp/reboot.go:97
    In [It] (Node Runtime: 5m0.001s)
      test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
    goroutine 8550 [semacquire, 5 minutes]
      sync.runtime_Semacquire(0xc0014320c0?)
        /usr/local/go/src/runtime/sema.go:62
      sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
        /usr/local/go/src/sync/waitgroup.go:139
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
        test/e2e/cloud/gcp/reboot.go:181
      > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
        test/e2e/cloud/gcp/reboot.go:100
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
        vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
    goroutine 8553 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b483c0, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b483c0, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
    goroutine 8554 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
    goroutine 8552 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc00100df80, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc00100df80, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:17:29.868: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:31.683: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:31.683: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:31.908: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:33.723: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:33.723: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:33.949: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:35.764: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:35.764: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:35.989: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:37.805: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:37.805: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:38.030: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:39.845: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:39.845: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:40.069: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:41.884: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:41.884: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:42.109: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:43.925: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:43.925: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:44.150: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:45.965: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:45.965: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:46.191: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:48.005: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:48.005: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:48.231: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw
Automatically polling progress:
  [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 5m20.29s)
    test/e2e/cloud/gcp/reboot.go:97
    In [It] (Node Runtime: 5m20.003s)
      test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
    goroutine 8550 [semacquire, 5 minutes]
      sync.runtime_Semacquire(0xc0014320c0?)
        /usr/local/go/src/runtime/sema.go:62
      sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
        /usr/local/go/src/sync/waitgroup.go:139
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
        test/e2e/cloud/gcp/reboot.go:181
      > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
        test/e2e/cloud/gcp/reboot.go:100
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
        vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
    goroutine 8553 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b483c0, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b483c0, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
    goroutine 8554 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
    goroutine 8552 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc00100df80, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc00100df80, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:17:50.045: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:50.045: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:50.271: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:52.086: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:52.086: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:52.311: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:54.126: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:54.126: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:54.351: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:56.167: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:56.167: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:56.391: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:58.207: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:58.207: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:58.431: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:00.248: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:00.248: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:00.471: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:02.289: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:02.289: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:02.511: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:04.331: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:04.331: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:04.552: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:06.370: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:06.371: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:06.592: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:08.410: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:08.410: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:08.632: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw
Automatically polling progress:
  [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 5m40.292s)
    test/e2e/cloud/gcp/reboot.go:97
    In [It] (Node Runtime: 5m40.005s)
      test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
    goroutine 8550 [semacquire, 6 minutes]
      sync.runtime_Semacquire(0xc0014320c0?)
        /usr/local/go/src/runtime/sema.go:62
      sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
        /usr/local/go/src/sync/waitgroup.go:139
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
        test/e2e/cloud/gcp/reboot.go:181
      > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
        test/e2e/cloud/gcp/reboot.go:100
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
        vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
    goroutine 8553 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b483c0, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b483c0, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
    goroutine 8554 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
    goroutine 8552 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc00100df80, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc00100df80, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:18:10.450: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:10.450: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:10.673: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:12.492: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:12.492: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:12.713: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:14.532: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:14.532: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:14.753: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:16.572: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:16.572: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:16.793: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:18.572: INFO: Node bootstrap-e2e-minion-group-6j12 didn't reach desired Ready condition status (true) within 5m0s Jan 29 19:18:18.612: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:18.833: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:24.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:24.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:18:26.312: INFO: Node bootstrap-e2e-minion-group-kbdq didn't reach desired Ready condition status (true) within 5m0s Jan 29 19:18:26.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:28.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Automatically polling progress:
  [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m0.293s)
    test/e2e/cloud/gcp/reboot.go:97
    In [It] (Node Runtime: 6m0.006s)
      test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
    goroutine 8550 [semacquire, 6 minutes]
      sync.runtime_Semacquire(0xc0014320c0?)
        /usr/local/go/src/runtime/sema.go:62
      sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
        /usr/local/go/src/sync/waitgroup.go:139
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
        test/e2e/cloud/gcp/reboot.go:181
      > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
        test/e2e/cloud/gcp/reboot.go:100
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
        vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
    goroutine 8554 [sleep]
      time.Sleep(0x77359400)
        /usr/local/go/src/runtime/time.go:195
      k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
        test/e2e/framework/node/wait.go:119
      k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
        test/e2e/framework/node/wait.go:143
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
        test/e2e/cloud/gcp/reboot.go:301
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:18:30.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:32.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:34.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:36.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:38.613: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:40.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:42.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:44.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:46.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:48.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
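Triage note: the Spec Goroutine block above shows why the spec hangs for the whole window. The parent test goroutine (8550) blocks in sync.(*WaitGroup).Wait at reboot.go:181 while one worker goroutine per node runs rebootNode and then the readiness poll; by this point only goroutine 8554 is still polling, the other two workers having exited after "didn't reach desired Ready condition status (true) within 5m0s" at 19:18:18 and 19:18:26. A stripped-down sketch of that fan-out/fan-in shape — an illustration under assumed names, not the code at test/e2e/cloud/gcp/reboot.go:

    // testRebootShape mirrors the goroutine structure in the dumps: the parent
    // blocks on a WaitGroup while one worker per node reboots it and waits for Ready.
    package triage

    import "sync"

    func testRebootShape(nodes []string, rebootAndWait func(node string) bool) bool {
        results := make([]bool, len(nodes)) // one slot per worker, so no lock is needed
        var wg sync.WaitGroup
        wg.Add(len(nodes))
        for i := range nodes {
            go func(ix int) { // workers: goroutines 8552/8553/8554 above
                defer wg.Done()
                results[ix] = rebootAndWait(nodes[ix])
            }(i)
        }
        wg.Wait() // parent: goroutine 8550, parked in "semacquire" for minutes
        for _, ok := range results {
            if !ok {
                return false // "at least one node failed to reboot in the time given"
            }
        }
        return true
    }

A consequence of this shape is that the slowest worker pins the spec until its own 5m poll budget expires, which is why the progress reports keep repeating even after two of the three nodes have already been declared failed.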
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m20.295s)
  test/e2e/cloud/gcp/reboot.go:97
  In [It] (Node Runtime: 6m20.008s)
    test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
  goroutine 8550 [semacquire, 6 minutes]
    sync.runtime_Semacquire(0xc0014320c0?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
      test/e2e/cloud/gcp/reboot.go:100
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 8554 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
      test/e2e/framework/node/wait.go:143
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
      test/e2e/cloud/gcp/reboot.go:301
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:18:50.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:18:52.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:18:54.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:18:57.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:18:59.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:01.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:03.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:05.178: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:07.221: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:09.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m40.296s)
  test/e2e/cloud/gcp/reboot.go:97
  In [It] (Node Runtime: 6m40.009s)
    test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
  goroutine 8550 [semacquire, 7 minutes]
    sync.runtime_Semacquire(0xc0014320c0?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
      test/e2e/cloud/gcp/reboot.go:100
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 8554 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
      test/e2e/framework/node/wait.go:143
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
      test/e2e/cloud/gcp/reboot.go:301
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:19:11.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:13.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:15.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:17.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:19.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:21.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:23.564: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:25.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:27.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:29.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m0.298s)
  test/e2e/cloud/gcp/reboot.go:97
  In [It] (Node Runtime: 7m0.011s)
    test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
  goroutine 8550 [semacquire, 7 minutes]
    sync.runtime_Semacquire(0xc0014320c0?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
      test/e2e/cloud/gcp/reboot.go:100
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 8554 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
      test/e2e/framework/node/wait.go:143
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
      test/e2e/cloud/gcp/reboot.go:301
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:19:31.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
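The repeated "Condition Ready of node ... is false instead of true" lines above come from the polling loop in test/e2e/framework/node/wait.go that goroutine 8554 is sleeping inside. A rough client-go sketch of that pattern (the function name, package name, and exact log strings here are illustrative, not the framework's code):

    package nodewait // illustrative package name

    import (
    	"context"
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // pollNodeReady approximates WaitConditionToBe: fetch the node every 2s
    // until its Ready condition is True or the timeout elapses, logging each miss.
    func pollNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) bool {
    	for start := time.Now(); time.Since(start) < timeout; time.Sleep(2 * time.Second) {
    		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			fmt.Printf("Couldn't get node %s\n", name)
    			continue
    		}
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == v1.NodeReady {
    				if cond.Status == v1.ConditionTrue {
    					return true
    				}
    				fmt.Printf("Condition Ready of node %s is %v instead of true. Reason: %s, message: %s\n",
    					name, cond.Status, cond.Reason, cond.Message)
    			}
    		}
    	}
    	return false
    }

With a 2s sleep and a 5m0s budget, this loop fails after roughly 150 attempts, which matches the "didn't reach desired Ready condition status (true) within 5m0s" verdicts for nodes 6j12 and kbdq above.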
Jan 29 19:19:33.800: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw]
Jan 29 19:19:33.800: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:19:33.800: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-k4wx2" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:19:33.800: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 19:19:33.847: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 47.271481ms
Jan 29 19:19:33.847: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }]
Jan 29 19:19:33.848: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. Elapsed: 48.583406ms
Jan 29 19:19:33.848: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:12:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }]
Jan 29 19:19:33.849: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=false. Elapsed: 48.661602ms
Jan 29 19:19:33.849: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-k4wx2' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:05:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:03 +0000 UTC }]
Jan 29 19:19:35.894: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.093737715s
Jan 29 19:19:35.894: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:19:33 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }]
Jan 29 19:19:35.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. Elapsed: 2.095024279s
Jan 29 19:19:35.895: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=false. Elapsed: 2.094964471s
Jan 29 19:19:35.895: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:12:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }]
Jan 29 19:19:35.895: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-k4wx2' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:19:33 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:03 +0000 UTC }]
Jan 29 19:19:37.891: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 4.090762355s
Jan 29 19:19:37.891: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 19:19:37.892: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=true. Elapsed: 4.092197085s
Jan 29 19:19:37.892: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" satisfied condition "running and ready, or succeeded"
Jan 29 19:19:37.894: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=true. Elapsed: 4.094237701s
Jan 29 19:19:37.894: INFO: Pod "metadata-proxy-v0.1-k4wx2" satisfied condition "running and ready, or succeeded"
Jan 29 19:19:37.894: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw]
Jan 29 19:19:37.894: INFO: Reboot successful on node bootstrap-e2e-minion-group-zmlw
Jan 29 19:19:37.894: INFO: Node bootstrap-e2e-minion-group-6j12 failed reboot test.
Jan 29 19:19:37.894: INFO: Node bootstrap-e2e-minion-group-kbdq failed reboot test.
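A pod passes the "running and ready, or succeeded" check above when its phase is Succeeded, or its phase is Running and its PodReady condition is True; that is why the condition dumps show {Ready False ...} entries until kubelet republishes status after the reboot. A hedged sketch of the predicate (an illustrative helper, not the framework's exact implementation):

    package podwait // illustrative package name

    import v1 "k8s.io/api/core/v1"

    // runningAndReadyOrSucceeded mirrors the per-pod condition evaluated in the log.
    func runningAndReadyOrSucceeded(pod *v1.Pod) bool {
    	if pod.Status.Phase == v1.PodSucceeded {
    		return true
    	}
    	if pod.Status.Phase != v1.PodRunning {
    		return false
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == v1.PodReady {
    			return cond.Status == v1.ConditionTrue
    		}
    	}
    	return false
    }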
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:19:37.894
< Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 19:19:37.894 (7m8.128s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:19:37.894
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 19:19:37.895
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-vf6r6 to bootstrap-e2e-minion-group-6j12
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.433613505s (1.433635054s including waiting)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": dial tcp 10.64.2.5:8181: connect: connection refused
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-vf6r6_kube-system(0cea2a5c-3519-4b06-a172-87a74da427cd)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.13:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.13:8181/ready": dial tcp 10.64.2.13:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Liveness probe failed: Get "http://10.64.2.13:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-vf6r6
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-vf6r6
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xqdgk to bootstrap-e2e-minion-group-kbdq
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 990.09151ms (990.109933ms including waiting)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "http://10.64.3.3:8181/ready": dial tcp 10.64.3.3:8181: connect: connection refused
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "http://10.64.3.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "http://10.64.3.8:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container coredns failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-xqdgk
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "http://10.64.3.17:8181/ready": dial tcp 10.64.3.17:8181: connect: connection refused
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-vf6r6
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xqdgk
Jan 29 19:19:37.955: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 19:19:37.955: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 29 19:19:37.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3683c became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_67c56 became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_df769 became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_28596 became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8fdc7 became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_eda6e became leader
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-2vqtg to bootstrap-e2e-minion-group-6j12
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 954.093152ms (954.103201ms including waiting)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-2vqtg_kube-system(9b972156-4678-407b-bae6-cbb0320f2268)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-2vqtg_kube-system(9b972156-4678-407b-bae6-cbb0320f2268)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Liveness probe failed: Get "http://10.64.2.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Failed: Error: failed to get sandbox container task: no running task found: task ce359821e83a420e36dfe37b2ccf490dd7b434c6387199aa880e2a31a15f9761 not found: not found
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-86td2 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 898.19014ms (898.205304ms including waiting)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-86td2_kube-system(69719ba2-5e8c-4fb5-851f-01aacdebb1fe)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-86td2_kube-system(69719ba2-5e8c-4fb5-851f-01aacdebb1fe)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Unhealthy: Liveness probe failed: Get "http://10.64.1.13:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Failed: Error: failed to get sandbox container task: no running task found: task e3bdcea50768017e3097570b0a7fd8f8b7d08ec4f9f0844f58f51996a1b259ed not found: not found
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-sl29q to bootstrap-e2e-minion-group-kbdq
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 634.905196ms (634.917128ms including waiting)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "http://10.64.3.9:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-sl29q_kube-system(85b21872-2276-4a8c-b663-a6787440ee59)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "http://10.64.3.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-2vqtg
Jan 29 19:19:37.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-86td2
Jan 29 19:19:37.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-sl29q
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-apiserver in pod kube-apiserver-bootstrap-e2e-master_kube-system(bb9539f6145547e44e6540e67cf542b1)
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_950167c8-36b9-42df-8a85-3a9d28c53b4d became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8362f635-12b0-418d-8264-942880514a9e became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1c918cd0-bdd9-4406-82a9-d0c9fd5f6aa2 became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e8006dea-56a4-4ae5-8fe4-7691ecdbac01 became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c5609834-1631-4734-825c-ab0ef0ba6696 became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_5cc95c90-cfe0-4ac9-b5ea-8c5338867cbd became leader
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sqslx to bootstrap-e2e-minion-group-6j12
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.376080383s (1.376088044s including waiting)
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-sqslx_kube-system(e0911a50-61bc-4e97-9427-cf2d00a53fcc)
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-sqslx
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-sqslx
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sqslx
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6j12_kube-system(4b09de720b01bf61ad28571efe2a195a)
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6j12_kube-system(4b09de720b01bf61ad28571efe2a195a)
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-kbdq_kube-system(61d71385284b43d8d86322a53815ff12) Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae) Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae) Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
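The repeated DNSConfigForming warnings on all three minions mean the nodes' /etc/resolv.conf listed more nameservers than the kubelet will propagate into pods; it keeps the first entries and drops the rest, which is why the applied line is exactly "1.1.1.1 8.8.8.8 1.0.0.1". A sketch of that truncation, assuming the usual limit of three nameservers:

    // Truncation behind the "Nameserver limits were exceeded" warnings above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    const maxNameservers = 3 // assumed kubelet limit; matches the 3-entry applied line

    func applyNameserverLimit(resolvConf string) []string {
    	var servers []string
    	for _, line := range strings.Split(resolvConf, "\n") {
    		f := strings.Fields(line)
    		if len(f) >= 2 && f[0] == "nameserver" {
    			servers = append(servers, f[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		servers = servers[:maxNameservers] // extras are dropped, with a warning event
    	}
    	return servers
    }

    func main() {
    	conf := "nameserver 1.1.1.1\nnameserver 8.8.8.8\nnameserver 1.0.0.1\nnameserver 9.9.9.9\n"
    	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 8.8.8.8 1.0.0.1]
    }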
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc11fe53-5cf0-4193-a2bb-e6c9362442ab became leader Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_01571e77-c85b-4452-a422-92094f674352 became leader Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3563e5ef-6b74-4b5f-aaae-be9535c8b370 became leader Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b7b7f0d1-f60a-4a81-b65f-f63f2e050806 became leader Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_7416f78d-b81a-4fde-9093-a1e9875aad37 became leader Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_29f719d9-3d11-4dd5-89e1-51aecacbbac6 became leader Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
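Six "became leader" events for kube-scheduler mean the scheduler went through six rounds of leader election, one per restart visible in the Killing/BackOff events above. The mechanism is client-go's leader election over a lock object; a minimal sketch of the same mechanism (the lock name, namespace, and durations here are illustrative, not the scheduler's actual configuration):

    // Minimal client-go leader election, the machinery behind the
    // "LeaderElection: ... became leader" events above.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	id, _ := os.Hostname()
    	lock := &resourcelock.LeaseLock{
    		LeaseMeta:  metav1.ObjectMeta{Name: "demo-scheduler", Namespace: "kube-system"},
    		Client:     cs.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
    	}
    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second, // how long a lease is valid without renewal
    		RenewDeadline: 10 * time.Second,
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) { fmt.Println(id, "became leader") },
    			OnStoppedLeading: func() { fmt.Println(id, "lost the lease") },
    		},
    	})
    }

A fresh election after every scheduler restart is expected; the six events are evidence of repeated restarts, not of an election problem.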
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-ch8vf to bootstrap-e2e-minion-group-zmlw Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 524.239661ms (524.253716ms including waiting) Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container default-http-backend Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container default-http-backend Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Unhealthy: Liveness probe failed: Get "http://10.64.1.9:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
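The default-http-backend restart above is the normal liveness-probe path: the kubelet GETs the probe endpoint, the request times out ("context deadline exceeded"), and after enough consecutive failures it emits Killing and restarts the container. A sketch of an equivalent probe definition (the thresholds are assumptions; the port matches the URL in the event):

    // An HTTP liveness probe of the shape implied by the events above.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	probe := &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
    		},
    		TimeoutSeconds:   5,  // the "context deadline exceeded" failure is this timeout firing
    		PeriodSeconds:    10, // probe interval
    		FailureThreshold: 3,  // after this many failures the kubelet restarts the container
    	}
    	fmt.Printf("%+v\n", probe)
    }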
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-ch8vf Jan 29 19:19:37.955: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-69vb9 to bootstrap-e2e-minion-group-6j12 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 819.373409ms (819.391143ms including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.793896744s (1.793906041s including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:19:37.955: INFO: 
event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bsd85 to bootstrap-e2e-master Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.670101ms (733.681792ms including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: 
{kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.802128586s (1.802140747s including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-k4wx2 to bootstrap-e2e-minion-group-zmlw Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.06682ms (714.080021ms including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.785588602s (1.785596591s including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
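The two "services have not yet been read at least once, cannot construct envvars" failures for metadata-proxy-v0.1-bsd85 occur when the kubelet starts a container before its Service informer has synced, so it cannot yet build the SERVICE_* environment variables; that reading is an inference from the message, consistent with the master having just rebooted. A pod that does not need those variables can opt out of service links, sketched here:

    // Opting out of service env-var injection via EnableServiceLinks.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	disabled := false
    	spec := corev1.PodSpec{
    		// With EnableServiceLinks=false the kubelet skips the Docker-link-style
    		// SERVICE_* env vars whose construction failed in the events above.
    		EnableServiceLinks: &disabled,
    		Containers:         []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.8"}},
    	}
    	fmt.Println(*spec.EnableServiceLinks)
    }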
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sxj7d to bootstrap-e2e-minion-group-kbdq Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.105616ms (714.11794ms including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:19:37.955: INFO: event for 
metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.882455818s (1.882464632s including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
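The NodeNotReady events interleaved through this dump mark the windows in which each rebooted minion dropped out of the Ready condition, which is also what the 20s waits at the start of the test were checking. That check reduces to inspecting the node's Ready condition; a small client-go sketch (kubeconfig path assumed):

    // Report each node's Ready condition, the state behind NodeNotReady events.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for i := range nodes.Items {
    		n := &nodes.Items[i]
    		fmt.Printf("%s ready=%v\n", n.Name, nodeIsReady(n))
    	}
    }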
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bsd85 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-k4wx2 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-69vb9 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sxj7d Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
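The four SuccessfulCreate events show the daemonset-controller placing one metadata-proxy-v0.1 pod per node, master included. When picking a single object's history out of a dump this size, the API server can do the filtering through an event field selector; a sketch (namespace and names copied from the events above):

    // List only the events whose involvedObject is the metadata-proxy DaemonSet.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/fields"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	sel := fields.Set{
    		"involvedObject.kind": "DaemonSet",
    		"involvedObject.name": "metadata-proxy-v0.1",
    	}.AsSelector().String()
    	evs, err := cs.CoreV1().Events("kube-system").List(context.TODO(),
    		metav1.ListOptions{FieldSelector: sel})
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range evs.Items {
    		fmt.Printf("%s: %s\n", e.Reason, e.Message)
    	}
    }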
Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-57s7b to bootstrap-e2e-minion-group-6j12 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.867162036s (1.867179734s including waiting) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.143065018s (1.143075491s including waiting) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-57s7b Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-57s7b Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rbv42 to bootstrap-e2e-minion-group-kbdq Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.329430274s (1.329453807s including waiting) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet 
bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 999.838364ms (999.850042ms including waiting) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": dial tcp 10.64.3.4:10250: connect: connection refused Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": dial tcp 10.64.3.4:10250: connect: connection refused Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Failed: Error: failed to get sandbox container task: no running task found: task 9b8fcc9e9e402a3c97e0f4aec77203618c2c01ccfd4d4d09a7ae88ba7b697e9a not found: not found Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
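The connect-refused and timeout probe failures for metrics-server above are the kind of state that keeps the test's five-minute "running and ready" wait from ever succeeding. That wait is essentially a poll of the pod's phase and Ready condition; a simplified sketch (the real framework also accepts Succeeded pods, and the pod name is copied from the events above):

    // Poll a pod until it is Running and Ready, or until a 5m timeout.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
    	if p.Status.Phase != corev1.PodRunning {
    		return false
    	}
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 5*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			p, err := cs.CoreV1().Pods("kube-system").Get(ctx,
    				"metrics-server-v0.5.2-867b8754b9-rbv42", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling through transient errors
    			}
    			return podReady(p), nil
    		})
    	fmt.Println("ready:", err == nil)
    }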
Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.10:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.10:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.10:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.10:10250/readyz": context deadline exceeded Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-rbv42_kube-system(692fae41-4cdd-4a87-8903-78ba3c7a5848) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-rbv42 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
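The TaintManagerEviction "Cancelling deletion" event means the node came back to Ready before the pod's not-ready toleration expired, so the taint manager called off the eviction it had queued when the node went NotReady. That grace window comes from a toleration like the following sketch (300s is the usual admission-controller default, assumed here rather than read from the pod):

    // The default-shaped toleration that delays eviction from a NotReady node.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	grace := int64(300) // seconds the pod tolerates the not-ready taint
    	tol := corev1.Toleration{
    		Key:               "node.kubernetes.io/not-ready",
    		Operator:          corev1.TolerationOpExists,
    		Effect:            corev1.TaintEffectNoExecute,
    		TolerationSeconds: &grace, // eviction fires only if NotReady persists this long
    	}
    	fmt.Printf("%+v\n", tol)
    }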
Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rbv42 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-zmlw Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.399668429s (1.399675942s including waiting) Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(998e9588-4f8a-4c36-bffc-169b133e589e) Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(998e9588-4f8a-4c36-bffc-169b133e589e) Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:19:37.956 (61ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:19:37.956 Jan 29 19:19:37.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:19:38.001 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:19:38.001 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:19:38.001 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:19:38.001 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:19:38.001 STEP: Collecting events from namespace "reboot-6953". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 19:19:38.001 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 19:19:38.042 Jan 29 19:19:38.083: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 19:19:38.083: INFO: Jan 29 19:19:38.128: INFO: Logging node info for node bootstrap-e2e-master Jan 29 19:19:38.170: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6d594531-bf60-4169-a952-1435da6f1f19 2476 0 2023-01-29 18:58:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 19:18:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:18:44 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:18:44 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:18:44 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:18:44 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.160.185,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:715ad78430040f7d6ba514abe5aaad49,SystemUUID:715ad784-3004-0f7d-6ba5-14abe5aaad49,BootID:68c04943-fcd4-4db6-91f3-becf325d9eb5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:19:38.171: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 19:19:38.217: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 19:19:38.293: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container kube-controller-manager ready: true, restart count 7 Jan 29 19:19:38.293: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container etcd-container ready: true, restart count 1 Jan 29 19:19:38.293: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container etcd-container ready: true, restart count 2 Jan 29 19:19:38.293: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container kube-apiserver ready: true, restart count 3 Jan 29 19:19:38.293: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 18:57:34 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 29 19:19:38.293: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 18:57:34 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 29 19:19:38.293: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container kube-scheduler ready: false, restart count 5 Jan 29 19:19:38.293: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 29 19:19:38.293: INFO: metadata-proxy-v0.1-bsd85 started at 2023-01-29 18:58:01 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:38.293: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 19:19:38.293: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 19:19:38.471: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 19:19:38.471: INFO: Logging node info for node bootstrap-e2e-minion-group-6j12 Jan 29 19:19:38.513: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6j12 ab88abcc-a824-4e7b-91d9-e5b55ca7b07b 2580 0 2023-01-29 18:58:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6j12 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 19:13:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 19:15:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 19:19:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 19:19:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-6j12,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.40.177,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:533e501db67cda40a67ec8f66182930e,SystemUUID:533e501d-b67c-da40-a67e-c8f66182930e,BootID:2fabb178-4b4e-4a6c-9089-f906d84a1938,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:19:38.513: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6j12 Jan 29 19:19:38.559: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6j12 Jan 29 19:19:38.623: INFO: kube-dns-autoscaler-5f6455f985-sqslx started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.623: INFO: Container autoscaler ready: true, restart count 3 Jan 29 19:19:38.623: INFO: coredns-6846b5b5f-vf6r6 started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.623: INFO: Container coredns ready: true, restart count 7 Jan 29 19:19:38.623: INFO: metadata-proxy-v0.1-69vb9 started at 2023-01-29 18:58:06 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:38.623: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 19:19:38.623: INFO: Container prometheus-to-sd-exporter ready: true, 
restart count 2 Jan 29 19:19:38.623: INFO: konnectivity-agent-2vqtg started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.623: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 19:19:38.623: INFO: kube-proxy-bootstrap-e2e-minion-group-6j12 started at 2023-01-29 18:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.623: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 19:19:38.803: INFO: Latency metrics for node bootstrap-e2e-minion-group-6j12 Jan 29 19:19:38.803: INFO: Logging node info for node bootstrap-e2e-minion-group-kbdq Jan 29 19:19:38.846: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kbdq c88d547b-ac1b-48a3-9f38-f761a4792a9d 2537 0 2023-01-29 18:58:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kbdq kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 19:13:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 19:15:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 19:19:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 19:19:23 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-kbdq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.183.142,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f85a1ba151054485449fa0d667f3e53e,SystemUUID:f85a1ba1-5105-4485-449f-a0d667f3e53e,BootID:a74596f7-0e7e-4274-8a05-ac891407debe,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:19:38.846: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kbdq Jan 29 19:19:38.892: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kbdq Jan 
29 19:19:38.959: INFO: metadata-proxy-v0.1-sxj7d started at 2023-01-29 18:58:07 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:38.959: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 19:19:38.959: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 19:19:38.959: INFO: konnectivity-agent-sl29q started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.959: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 19:19:38.959: INFO: coredns-6846b5b5f-xqdgk started at 2023-01-29 18:58:22 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.959: INFO: Container coredns ready: true, restart count 4 Jan 29 19:19:38.959: INFO: metrics-server-v0.5.2-867b8754b9-rbv42 started at 2023-01-29 18:58:31 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:38.959: INFO: Container metrics-server ready: false, restart count 7 Jan 29 19:19:38.959: INFO: Container metrics-server-nanny ready: false, restart count 6 Jan 29 19:19:38.959: INFO: kube-proxy-bootstrap-e2e-minion-group-kbdq started at 2023-01-29 18:58:06 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.959: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 19:19:39.128: INFO: Latency metrics for node bootstrap-e2e-minion-group-kbdq Jan 29 19:19:39.128: INFO: Logging node info for node bootstrap-e2e-minion-group-zmlw Jan 29 19:19:39.180: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zmlw e228bd00-93a0-454f-b62d-2a81447198ac 2607 0 2023-01-29 18:58:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zmlw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 19:13:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 19:14:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 19:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 19:19:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-zmlw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 
UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:33 +0000 UTC,LastTransitionTime:2023-01-29 19:19:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:33 +0000 UTC,LastTransitionTime:2023-01-29 19:19:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:33 +0000 UTC,LastTransitionTime:2023-01-29 19:19:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:19:33 +0000 UTC,LastTransitionTime:2023-01-29 19:19:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.185.251.137,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:edebb6d4adaefd8f58c1a37613cc5a13,SystemUUID:edebb6d4-adae-fd8f-58c1-a37613cc5a13,BootID:1e777854-9a8e-44b3-9035-d54a4da76007,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:19:39.181: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-zmlw Jan 29 19:19:39.233: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zmlw Jan 29 19:19:39.301: INFO: kube-proxy-bootstrap-e2e-minion-group-zmlw started at 2023-01-29 18:58:02 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:39.301: INFO: Container kube-proxy ready: true, restart count 8 Jan 29 19:19:39.301: INFO: l7-default-backend-8549d69d99-ch8vf started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:39.301: INFO: Container default-http-backend ready: false, restart count 3 Jan 29 19:19:39.301: INFO: volume-snapshot-controller-0 started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:39.301: INFO: Container volume-snapshot-controller ready: true, restart count 9 Jan 29 19:19:39.301: INFO: metadata-proxy-v0.1-k4wx2 started at 2023-01-29 18:58:03 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:39.301: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 19:19:39.301: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 19:19:39.301: INFO: konnectivity-agent-86td2 started at 2023-01-29 
18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:39.301: INFO: Container konnectivity-agent ready: false, restart count 7 Jan 29 19:20:14.794: INFO: Latency metrics for node bootstrap-e2e-minion-group-zmlw END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:20:14.794 (36.793s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:20:14.794 (36.794s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:20:14.794 STEP: Destroying namespace "reboot-6953" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 19:20:14.794 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:20:14.839 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:20:14.839 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:20:14.839 (0s)
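Triage note: the [FAILED] at test/e2e/cloud/gcp/reboot.go:190 ("at least one node failed to reboot in the time given") fired at 19:19:37.894, roughly seven minutes after the test began issuing "nohup sh -c 'sleep 10 && sudo reboot'" over SSH to the minions at 19:12:30. The node dumps above are consistent with bootstrap-e2e-minion-group-zmlw being the node that never settled: its kube-proxy shows restart count 8, volume-snapshot-controller-0 restart count 9, and konnectivity-agent-86td2 is not ready with restart count 7, so the post-reboot "running and ready, or succeeded" pod wait plausibly exhausted its budget there. The sketch below is a minimal, illustrative approximation of the Ready-condition wait visible in the log ("Waiting up to 2m0s for node ... condition Ready to be false", then "... to be true"); it is not the framework's actual helper (those live under test/e2e/framework), and it assumes client-go/apimachinery v0.27+ for wait.PollUntilContextTimeout plus the kubeconfig path shown in the log.

// Minimal sketch, not the e2e framework's code: poll one node's Ready
// condition with plain client-go, mirroring the two-phase wait in the log.
// Node name, interval, and timeouts are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition has the wanted status.
func nodeReady(node *v1.Node, want v1.ConditionStatus) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == want
		}
	}
	return false
}

// waitForNodeReady polls until the named node's Ready condition matches want,
// tolerating transient API errors the way a reboot test must (the apiserver
// itself may be briefly unreachable).
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, want v1.ConditionStatus, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient error: keep polling
			}
			return nodeReady(node, want), nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Phase 1: the reboot must take effect, so Ready flips to false.
	// Phase 2: the kubelet must come back, so Ready returns to true.
	const node = "bootstrap-e2e-minion-group-kbdq" // name taken from the log
	if err := waitForNodeReady(ctx, cs, node, v1.ConditionFalse, 2*time.Minute); err != nil {
		fmt.Println("node never went NotReady:", err)
		return
	}
	if err := waitForNodeReady(ctx, cs, node, v1.ConditionTrue, 2*time.Minute); err != nil {
		fmt.Println("node did not come back:", err)
		return
	}
	fmt.Println("reboot cycle observed for", node)
}

A node that reboots cleanly traces the same arc kbdq does early in the log: Ready stays true through the 10s sleep, flips false once the kubelet goes down, and returns true after boot. A node whose pods then crash-loop, as the zmlw dump suggests, can pass both of these waits and still starve the later per-pod "running and ready, or succeeded" wait.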
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:19:37.894from junit_01.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 19:12:29.48 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 19:12:29.48 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 19:12:29.48 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 19:12:29.48 Jan 29 19:12:29.480: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 19:12:29.481 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 19:12:29.606 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 19:12:29.686 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 19:12:29.766 (287ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 19:12:29.766 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 19:12:29.766 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 19:12:29.766 Jan 29 19:12:29.912: INFO: Getting bootstrap-e2e-minion-group-zmlw Jan 29 19:12:29.912: INFO: Getting bootstrap-e2e-minion-group-kbdq Jan 29 19:12:29.912: INFO: Getting bootstrap-e2e-minion-group-6j12 Jan 29 19:12:29.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6j12 condition Ready to be true Jan 29 19:12:29.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kbdq condition Ready to be true Jan 29 19:12:29.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-zmlw condition Ready to be true Jan 29 19:12:29.998: INFO: Node bootstrap-e2e-minion-group-kbdq has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sxj7d" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:12:29.998: INFO: Node bootstrap-e2e-minion-group-6j12 has 3 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-69vb9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:12:29.998: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sqslx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:12:29.999: INFO: Waiting up to 5m0s 
for pod "kube-proxy-bootstrap-e2e-minion-group-6j12" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:12:29.999: INFO: Node bootstrap-e2e-minion-group-zmlw has 3 assigned pods with no liveness probes: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw] Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw] Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-k4wx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:12:29.999: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:12:30.042: INFO: Pod "metadata-proxy-v0.1-sxj7d": Phase="Running", Reason="", readiness=true. Elapsed: 43.330371ms Jan 29 19:12:30.042: INFO: Pod "metadata-proxy-v0.1-sxj7d" satisfied condition "running and ready, or succeeded" Jan 29 19:12:30.045: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx": Phase="Running", Reason="", readiness=true. Elapsed: 46.591724ms Jan 29 19:12:30.045: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx" satisfied condition "running and ready, or succeeded" Jan 29 19:12:30.045: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.291521ms Jan 29 19:12:30.045: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:30.045: INFO: Pod "metadata-proxy-v0.1-69vb9": Phase="Running", Reason="", readiness=true. Elapsed: 46.837808ms Jan 29 19:12:30.045: INFO: Pod "metadata-proxy-v0.1-69vb9" satisfied condition "running and ready, or succeeded" Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq": Phase="Running", Reason="", readiness=true. Elapsed: 49.04646ms Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" satisfied condition "running and ready, or succeeded" Jan 29 19:12:30.048: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:12:30.048: INFO: Getting external IP address for bootstrap-e2e-minion-group-kbdq Jan 29 19:12:30.048: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-kbdq(34.168.183.142:22) Jan 29 19:12:30.048: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=true. 
Elapsed: 48.795567ms Jan 29 19:12:30.048: INFO: Pod "metadata-proxy-v0.1-k4wx2" satisfied condition "running and ready, or succeeded" Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. Elapsed: 48.956074ms Jan 29 19:12:30.048: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }] Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12": Phase="Running", Reason="", readiness=true. Elapsed: 49.407798ms Jan 29 19:12:30.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12" satisfied condition "running and ready, or succeeded" Jan 29 19:12:30.048: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:12:30.048: INFO: Getting external IP address for bootstrap-e2e-minion-group-6j12 Jan 29 19:12:30.048: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6j12(34.82.40.177:22) Jan 29 19:12:30.564: INFO: ssh prow@34.168.183.142:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 19:12:30.564: INFO: ssh prow@34.168.183.142:22: stdout: "" Jan 29 19:12:30.564: INFO: ssh prow@34.168.183.142:22: stderr: "" Jan 29 19:12:30.564: INFO: ssh prow@34.168.183.142:22: exit code: 0 Jan 29 19:12:30.564: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kbdq condition Ready to be false Jan 29 19:12:30.567: INFO: ssh prow@34.82.40.177:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 19:12:30.567: INFO: ssh prow@34.82.40.177:22: stdout: "" Jan 29 19:12:30.567: INFO: ssh prow@34.82.40.177:22: stderr: "" Jan 29 19:12:30.567: INFO: ssh prow@34.82.40.177:22: exit code: 0 Jan 29 19:12:30.567: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6j12 condition Ready to be false Jan 29 19:12:30.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:30.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:32.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.088243397s Jan 29 19:12:32.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:32.091: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. Elapsed: 2.091848596s Jan 29 19:12:32.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }] Jan 29 19:12:32.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:32.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:34.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.088146997s Jan 29 19:12:34.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:34.090: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.090891999s Jan 29 19:12:34.090: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:52 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }] Jan 29 19:12:34.692: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:34.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:36.108: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.109078768s Jan 29 19:12:36.108: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:36.109: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=true. Elapsed: 6.110327013s Jan 29 19:12:36.109: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" satisfied condition "running and ready, or succeeded" Jan 29 19:12:36.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:36.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:38.090: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.091511588s Jan 29 19:12:38.090: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:38.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:12:38.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:40.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.088199939s Jan 29 19:12:40.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:40.823: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:40.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:42.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.08851299s Jan 29 19:12:42.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:42.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:42.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:44.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.087901123s Jan 29 19:12:44.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:44.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:44.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:46.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.087971624s Jan 29 19:12:46.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:46.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:46.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:48.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.087964908s Jan 29 19:12:48.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:48.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:48.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:50.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.088087104s Jan 29 19:12:50.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:51.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:51.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:52.088: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.089426003s Jan 29 19:12:52.088: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:53.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:53.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:54.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.088090621s Jan 29 19:12:54.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:12:55.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:12:55.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 29 19:12:56.088: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.088691305s
Jan 29 19:12:56.088: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }]
Jan 29 19:12:57.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:12:57.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:12:58.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.088463773s
Jan 29 19:12:58.087: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:11:28 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }]
Jan 29 19:12:59.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:12:59.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:00.087: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 30.088221202s
Jan 29 19:13:00.087: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 19:13:00.087: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw]
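The predicate being evaluated above, "running and ready, or succeeded", accepts a pod that either completed successfully or is Running with a Ready=True status condition; that is why volume-snapshot-controller-0, Running but with readiness=false, kept failing the evaluation until 19:13:00. A minimal Go sketch of that check (an illustration against the corev1 API, not the e2e framework's actual helper):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// runningAndReadyOrSucceeded mirrors the logged predicate: succeed if the pod
// finished successfully, or if it is Running and reports Ready=True.
func runningAndReadyOrSucceeded(pod *corev1.Pod) (bool, error) {
	switch pod.Status.Phase {
	case corev1.PodSucceeded:
		return true, nil
	case corev1.PodFailed:
		return false, fmt.Errorf("pod %q failed permanently", pod.Name)
	case corev1.PodRunning:
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // Running but Ready not reported yet: keep polling
	default:
		return false, nil // Pending etc.: keep polling
	}
}

func main() {
	// A pod shaped like the one in the log: Running, but Ready=False.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{{
			Type: corev1.PodReady, Status: corev1.ConditionFalse,
		}},
	}}
	ok, err := runningAndReadyOrSucceeded(pod)
	fmt.Println(ok, err) // false <nil>, so the wait loop retries
}
```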
Jan 29 19:13:00.087: INFO: Getting external IP address for bootstrap-e2e-minion-group-zmlw
Jan 29 19:13:00.087: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-zmlw(35.185.251.137:22)
Jan 29 19:13:00.599: INFO: ssh prow@35.185.251.137:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 19:13:00.599: INFO: ssh prow@35.185.251.137:22: stdout: ""
Jan 29 19:13:00.599: INFO: ssh prow@35.185.251.137:22: stderr: ""
Jan 29 19:13:00.599: INFO: ssh prow@35.185.251.137:22: exit code: 0
Jan 29 19:13:00.599: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-zmlw condition Ready to be false
Jan 29 19:13:00.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:01.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:01.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:02.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:03.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:03.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:04.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:05.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:05.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:06.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:07.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:07.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:08.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:09.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:09.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
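The reboot itself is issued over SSH exactly as logged above: the command is wrapped in nohup and backgrounded with a 10-second delay so the session can return cleanly (exit code 0, empty stdout/stderr) before the node goes down. An illustrative stand-in for that step, using golang.org/x/crypto/ssh rather than the framework's own SSH helper; the address and user are taken from the log, while the credential here is a placeholder (the real test authenticates with the CI user's key):

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh" // go get golang.org/x/crypto/ssh
)

func main() {
	// Address and user as they appear in the log; password is a placeholder.
	addr := "35.185.251.137:22"
	cfg := &ssh.ClientConfig{
		User:            "prow",
		Auth:            []ssh.AuthMethod{ssh.Password("placeholder")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; never in production
	}

	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatalf("new session: %v", err)
	}
	defer sess.Close()

	// nohup + backgrounding + the 10s sleep let the SSH session exit with
	// code 0 before the reboot actually tears the connection down.
	cmd := "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &"
	if err := sess.Run(cmd); err != nil {
		log.Fatalf("run %q: %v", cmd, err)
	}
	fmt.Println("reboot scheduled, exit code 0")
}
```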
Jan 29 19:13:10.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:11.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:11.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:12.927: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:13.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:13.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:14.970: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:15.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:15.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:17.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:17.627: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-6j12 condition Ready to be true
Jan 29 19:13:17.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:17.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:13:19.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:19.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:19.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:13:21.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:21.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:21.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:13:23.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:23.757: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-kbdq condition Ready to be true
Jan 29 19:13:23.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
Jan 29 19:13:23.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:13:25.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:25.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:13:25.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
Jan 29 19:13:27.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:27.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:13:27.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
Jan 29 19:13:29.269: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:29.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:13:29.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
Jan 29 19:13:31.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:31.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:13:31.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
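From 19:13:23 onward the poll distinguishes two flavors of not-Ready: Ready=false with Reason NodeStatusUnknown (the kubelet went silent), and Ready=false with unreachable taints that the node controller adds on top, NoSchedule first and NoExecute a few seconds later, as the taint timestamps show. A client-go sketch of how one could surface those taints for a node (assumes a reachable cluster; the kubeconfig path is the one this job logs at startup):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"bootstrap-e2e-minion-group-6j12", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// node.kubernetes.io/unreachable is added by the node controller once the
	// kubelet stops posting status; NoExecute follows NoSchedule shortly
	// after, matching the timestamps in the log above.
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" {
			fmt.Printf("%s %s since %v\n", t.Key, t.Effect, t.TimeAdded)
		}
	}
}
```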
Jan 29 19:13:33.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:34.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
Jan 29 19:13:34.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure
Jan 29 19:13:35.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:36.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure
Jan 29 19:13:36.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
Jan 29 19:13:37.474: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:38.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure
Jan 29 19:13:38.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
Jan 29 19:13:39.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:13:40.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure
Jan 29 19:13:40.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure
Jan 29 19:13:41.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled Jan 29 19:13:42.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:13:42.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:13:43.602: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:44.235: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:44.235: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:45.642: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:46.275: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:46.275: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:47.682: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:48.315: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:48.315: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:49.722: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:50.355: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:50.355: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:51.762: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:52.395: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:52.395: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:53.803: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:54.434: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:54.435: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:55.843: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:56.475: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:56.475: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:57.884: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:13:58.514: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:13:58.514: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:13:59.924: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:00.554: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:00.555: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:01.965: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:02.595: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:02.595: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:04.006: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:04.635: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:04.635: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:06.047: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:06.676: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:06.676: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:08.087: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:08.716: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:08.716: 
INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:10.127: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:10.756: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:10.756: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:12.167: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:12.796: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:12.796: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:14.207: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:14.836: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:14.836: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:16.247: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:16.876: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:16.876: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:18.287: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:18.915: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:18.915: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:20.328: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:20.955: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:20.955: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:22.369: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:22.995: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:22.995: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:24.409: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:25.035: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:25.035: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:26.449: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:27.075: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:27.075: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:28.489: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:29.115: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:29.115: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:30.529: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:31.156: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:31.156: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:32.569: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:33.196: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:33.196: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:34.609: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:35.236: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:35.236: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:36.649: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:37.276: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:37.276: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:38.689: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:39.315: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:39.315: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:40.731: INFO: 
Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:41.356: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:14:41.356: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:14:42.771: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:14:48.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:48.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:48.232: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-zmlw condition Ready to be true Jan 29 19:14:48.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:14:50.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:50.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:50.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:14:52.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:52.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:52.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:14:54.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:54.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:54.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
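The surrounding loop is a bounded poll: every couple of seconds it fetches the node and compares the Ready condition against the wanted value, and the transient "Couldn't get node" API errors above are simply logged and retried rather than failing the wait. A hedged approximation of that loop using client-go and apimachinery's wait package (not the framework's exact implementation; node name, timeout, and kubeconfig path mirror the log):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's Ready condition equals want, treating
// failed Gets as retryable (the "Couldn't get node" lines in the log).
func waitNodeReady(cs kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("Couldn't get node %s\n", name) // transient: retry
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == want, nil
				}
			}
			return false, nil // Ready condition not reported yet
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Mirror the log: wait up to 5m0s for Ready to become true after reboot.
	if err := waitNodeReady(cs, "bootstrap-e2e-minion-group-zmlw", corev1.ConditionTrue, 5*time.Minute); err != nil {
		log.Fatalf("node never became Ready again: %v", err)
	}
}
```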
Jan 29 19:14:56.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:56.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:56.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:14:58.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:14:58.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:14:58.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:00.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:00.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:00.810: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:02.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:02.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:02.853: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:04.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. 
Failure Jan 29 19:15:04.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:04.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:06.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:06.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:06.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:08.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:08.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:08.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:10.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:10.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:11.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:12.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:12.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:13.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:14.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:14.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:15.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:16.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:16.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:17.157: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:18.928: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:18.928: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:19.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:20.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:20.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:21.243: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:23.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. 
Failure Jan 29 19:15:23.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:23.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:25.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:25.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:25.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:27.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:27.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:27.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:29.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:29.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:29.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:31.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:31.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:31.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:33.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:33.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:33.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:35.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:35.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:35.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:37.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:37.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:37.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:39.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:39.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:39.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:41.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. 
Failure Jan 29 19:15:41.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:41.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:43.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:43.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:43.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:45.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:45.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:45.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:47.554: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:47.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:47.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:49.596: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:49.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:49.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:51.640: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:51.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:51.893: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:53.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:53.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:53.936: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:55.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:55.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:55.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:57.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:15:57.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:15:58.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:15:59.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. 
Failure Jan 29 19:15:59.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:00.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:01.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:01.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:02.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:03.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:03.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:04.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:05.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:05.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:06.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:08.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:08.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:08.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:10.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:10.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:10.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:12.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:12.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:12.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:14.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:14.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:14.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:16.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:16.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. Failure Jan 29 19:16:16.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:18.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:17 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:22 +0000 UTC}]. 
Failure Jan 29 19:16:18.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:16:18.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:16:20.275: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:20.275: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:20.498: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:22.315: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:22.315: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:22.538: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:24.356: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:24.356: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:24.579: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:26.395: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:26.395: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:26.619: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:28.435: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:28.435: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:28.660: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:30.477: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:30.477: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:30.700: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:32.517: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:32.517: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:32.740: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:34.558: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:34.558: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:34.781: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:36.598: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:36.598: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:36.821: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:38.638: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:38.638: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:38.861: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:40.678: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:40.678: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:40.901: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:42.718: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:42.718: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:42.941: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:44.758: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:44.758: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:44.981: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:46.798: INFO: Couldn't get node 
bootstrap-e2e-minion-group-kbdq Jan 29 19:16:46.798: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:47.021: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:48.838: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:48.838: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:49.061: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:50.878: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:50.878: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:51.101: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:52.919: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:52.919: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:53.141: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:54.960: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:54.960: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:55.181: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:57.001: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:57.001: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:57.223: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:16:59.041: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:16:59.041: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:16:59.263: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:01.081: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:01.081: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:01.304: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:03.121: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:03.121: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:03.344: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:05.161: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:05.161: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:05.385: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:07.201: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:07.201: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:07.425: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:09.241: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:09.241: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:09.465: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:11.281: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:11.281: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:11.506: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:13.321: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:13.321: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:13.546: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:15.362: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:15.362: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:15.587: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:17.402: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:17.402: INFO: Couldn't get node 
bootstrap-e2e-minion-group-6j12 Jan 29 19:17:17.627: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:19.443: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:19.443: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:19.667: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:21.484: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:21.484: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:21.708: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:23.524: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:23.524: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:23.748: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:25.564: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:25.564: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:25.788: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:27.603: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:27.603: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:27.828: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:29.644: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:29.644: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 5m0.287s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m0.001s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 8550 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0014320c0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7fb0e5d081b8?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 8553 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b483c0, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) 
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b483c0, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 goroutine 8554 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 goroutine 8552 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc00100df80, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc00100df80, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 19:17:29.868: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:31.683: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:31.683: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:31.908: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:33.723: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:33.723: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:33.949: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:35.764: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:35.764: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:35.989: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:37.805: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:37.805: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:38.030: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:39.845: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:39.845: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:40.069: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:41.884: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:41.884: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:42.109: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:43.925: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:43.925: 
INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:44.150: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:45.965: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:45.965: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:46.191: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:48.005: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:48.005: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:48.231: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 5m20.29s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m20.003s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 8550 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0014320c0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7fb0e5d081b8?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 8553 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b483c0, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b483c0, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 goroutine 8554 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) 
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 goroutine 8552 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc00100df80, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc00100df80, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 19:17:50.045: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:50.045: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:50.271: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:52.086: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:52.086: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:52.311: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:54.126: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:54.126: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:54.351: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:56.167: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:56.167: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:56.391: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:17:58.207: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:17:58.207: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:17:58.431: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:00.248: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:00.248: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:00.471: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:02.289: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:02.289: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:02.511: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:04.331: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:04.331: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:04.552: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:06.370: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:06.371: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:06.592: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:08.410: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:08.410: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:08.632: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Automatically polling progress: 
[sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 5m40.292s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m40.005s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 8550 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc0014320c0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7fb0e5d081b8?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 8553 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b483c0, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b483c0, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 goroutine 8554 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 goroutine 8552 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc00100df80, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) 
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc00100df80, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 19:18:10.450: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:10.450: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:10.673: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:12.492: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:12.492: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:12.713: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:14.532: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:14.532: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:14.753: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:16.572: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:16.572: INFO: Couldn't get node bootstrap-e2e-minion-group-6j12 Jan 29 19:18:16.793: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:18.572: INFO: Node bootstrap-e2e-minion-group-6j12 didn't reach desired Ready condition status (true) within 5m0s Jan 29 19:18:18.612: INFO: Couldn't get node bootstrap-e2e-minion-group-kbdq Jan 29 19:18:18.833: INFO: Couldn't get node bootstrap-e2e-minion-group-zmlw Jan 29 19:18:24.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:24.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 19:13:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 19:13:32 +0000 UTC}]. Failure Jan 29 19:18:26.312: INFO: Node bootstrap-e2e-minion-group-kbdq didn't reach desired Ready condition status (true) within 5m0s Jan 29 19:18:26.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:28.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m0.293s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 6m0.006s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 8550 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc0014320c0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7fb0e5d081b8?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 8554 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 19:18:30.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:32.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:34.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:36.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:38.613: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:40.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:42.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:44.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:46.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 19:18:48.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
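Every status line above is derived from two fields of the Node object: the NodeReady entry in status.conditions (which carries the NodeStatusUnknown reason once the kubelet goes silent during the reboot) and the node.kubernetes.io/unreachable taints the node controller adds in response. A minimal client-go sketch that surfaces the same information, assuming a kubeconfig path in $KUBECONFIG; this is an illustration, not part of the test framework:

```go
package main

import (
	"context"
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	node, err := client.CoreV1().Nodes().Get(context.TODO(),
		"bootstrap-e2e-minion-group-zmlw", metav1.GetOptions{})
	if err != nil {
		// The "Couldn't get node ..." lines are this branch: the GET itself failed.
		fmt.Println("Couldn't get node:", err)
		return
	}
	// Source of "Condition Ready of node ... is false instead of true. Reason: ..., message: ..."
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			fmt.Printf("Ready=%s Reason=%s Message=%s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
	// Source of "...but Node is tainted by NodeController with [...]".
	for _, t := range node.Spec.Taints {
		fmt.Printf("%s %s %v\n", t.Key, t.Effect, t.TimeAdded)
	}
}
```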
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m20.295s)
  test/e2e/cloud/gcp/reboot.go:97
  In [It] (Node Runtime: 6m20.008s)
    test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
  goroutine 8550 [semacquire, 6 minutes]
    sync.runtime_Semacquire(0xc0014320c0?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
      test/e2e/cloud/gcp/reboot.go:100
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 8554 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
      test/e2e/framework/node/wait.go:143
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
      test/e2e/cloud/gcp/reboot.go:301
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:18:50.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:18:52.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:18:54.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:18:57.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:18:59.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:01.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:03.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:05.178: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
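These progress dumps recur every 20 seconds of spec runtime and expose the structure of testReboot: the spec goroutine (8550) parks in sync.WaitGroup.Wait at reboot.go:181 while one worker goroutine per node runs testReboot.func2 with its node index. Earlier dumps show all three workers (0x0, 0x1, 0x2); by now only worker 0x2, waiting on zmlw, survives, the other two having exited when 6j12 and kbdq missed the 5m0s deadline. A schematic sketch of that fan-out, not the real implementation:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// rebootAndWait stands in for rebootNode (reboot.go:301): issue the reboot,
// then wait for the node to report Ready again. The body is a placeholder.
func rebootAndWait(node string) bool {
	time.Sleep(50 * time.Millisecond)
	return true
}

func main() {
	nodes := []string{
		"bootstrap-e2e-minion-group-6j12",
		"bootstrap-e2e-minion-group-kbdq",
		"bootstrap-e2e-minion-group-zmlw",
	}
	result := make([]bool, len(nodes))

	var wg sync.WaitGroup
	wg.Add(len(nodes))
	for i := range nodes {
		go func(ix int) { // one worker per node: testReboot.func2(0x0), (0x1), (0x2)
			defer wg.Done()
			result[ix] = rebootAndWait(nodes[ix])
		}(i)
	}
	wg.Wait() // the spec goroutine parks here: reboot.go:181, [semacquire]

	for ix, ok := range result {
		if !ok {
			fmt.Printf("Node %s failed reboot test.\n", nodes[ix]) // cf. the log at 19:19:37
		}
	}
}
```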
Jan 29 19:19:07.221: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:09.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m40.296s)
  test/e2e/cloud/gcp/reboot.go:97
  In [It] (Node Runtime: 6m40.009s)
    test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
  goroutine 8550 [semacquire, 7 minutes]
    sync.runtime_Semacquire(0xc0014320c0?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
      test/e2e/cloud/gcp/reboot.go:100
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 8554 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
      test/e2e/framework/node/wait.go:143
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
      test/e2e/cloud/gcp/reboot.go:301
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:19:11.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:13.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:15.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:17.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:19.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:21.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
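The hex arguments in the WaitConditionToBe frame encode the loop's timing directly, since a Go time.Duration is an int64 count of nanoseconds: time.Sleep(0x77359400) is the 2-second poll, and the trailing 0x45d964b800 argument is the 5-minute timeout behind the "within 5m0s" failures above. A quick check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println(time.Duration(0x77359400))   // prints "2s": the poll interval
	fmt.Println(time.Duration(0x45d964b800)) // prints "5m0s": the Ready timeout
}
```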
Jan 29 19:19:23.564: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:25.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:27.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 19:19:29.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m0.298s)
  test/e2e/cloud/gcp/reboot.go:97
  In [It] (Node Runtime: 7m0.011s)
    test/e2e/cloud/gcp/reboot.go:97
  Spec Goroutine
  goroutine 8550 [semacquire, 7 minutes]
    sync.runtime_Semacquire(0xc0014320c0?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fb0e5d081b8?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fb0e5d081b8?, 0xc000e91600}, {0x8147108?, 0xc0015b4820}, {0x7813648, 0x37}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7fb0e5d081b8?, 0xc000e91600?})
      test/e2e/cloud/gcp/reboot.go:100
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e91600})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 8554 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0xc000b48d00, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
      test/e2e/framework/node/wait.go:143
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fb0e5d081b8, 0xc000e91600}, {0x8147108, 0xc0015b4820}, {0x7fff4010c5ee, 0x3}, {0xc000b48d00, 0x1f}, {0x7813648, 0x37})
      test/e2e/cloud/gcp/reboot.go:301
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 19:19:31.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
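wait.go:119, where goroutine 8554 sleeps, sits inside a poll loop of roughly the shape below. This is a simplified sketch with a hypothetical getReady stand-in, not the framework's exact code; the real loop compares the node's Ready condition against the wanted value, printing "Couldn't get node" when the GET fails and "Condition Ready ... instead of true" on a mismatch, until the timeout makes it give up:

```go
package main

import (
	"fmt"
	"time"
)

// waitConditionToBe: simplified shape of the poll loop in
// test/e2e/framework/node/wait.go. getReady stands in for fetching the
// node and inspecting its Ready condition.
func waitConditionToBe(getReady func() (bool, error), want bool,
	poll, timeout time.Duration) bool {
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(poll) {
		ready, err := getReady()
		if err != nil {
			fmt.Println("Couldn't get node") // GET failed, e.g. mid-reboot
			continue
		}
		if ready == want {
			return true
		}
		fmt.Printf("Condition Ready of node is %v instead of %v.\n", ready, want)
	}
	// cf. "didn't reach desired Ready condition status (true) within 5m0s"
	return false
}

func main() {
	readyAt := time.Now().Add(3 * time.Second)
	ok := waitConditionToBe(
		func() (bool, error) { return time.Now().After(readyAt), nil },
		true, 500*time.Millisecond, 10*time.Second)
	fmt.Println("node became Ready:", ok)
}
```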
Jan 29 19:19:33.800: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw] Jan 29 19:19:33.800: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:19:33.800: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-k4wx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:19:33.800: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:19:33.847: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 47.271481ms Jan 29 19:19:33.847: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:19:33.848: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. Elapsed: 48.583406ms Jan 29 19:19:33.848: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:12:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }] Jan 29 19:19:33.849: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=false. Elapsed: 48.661602ms Jan 29 19:19:33.849: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-k4wx2' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:05:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:03 +0000 UTC }] Jan 29 19:19:35.894: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.093737715s Jan 29 19:19:35.894: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:19:33 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:18 +0000 UTC }] Jan 29 19:19:35.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=false. Elapsed: 2.095024279s Jan 29 19:19:35.895: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=false. Elapsed: 2.094964471s Jan 29 19:19:35.895: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-zmlw' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:12:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:02 +0000 UTC }] Jan 29 19:19:35.895: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-k4wx2' on 'bootstrap-e2e-minion-group-zmlw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:13:42 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:19:33 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:58:03 +0000 UTC }] Jan 29 19:19:37.891: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 4.090762355s Jan 29 19:19:37.891: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 19:19:37.892: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=true. Elapsed: 4.092197085s Jan 29 19:19:37.892: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" satisfied condition "running and ready, or succeeded" Jan 29 19:19:37.894: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=true. Elapsed: 4.094237701s Jan 29 19:19:37.894: INFO: Pod "metadata-proxy-v0.1-k4wx2" satisfied condition "running and ready, or succeeded" Jan 29 19:19:37.894: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0 kube-proxy-bootstrap-e2e-minion-group-zmlw] Jan 29 19:19:37.894: INFO: Reboot successful on node bootstrap-e2e-minion-group-zmlw Jan 29 19:19:37.894: INFO: Node bootstrap-e2e-minion-group-6j12 failed reboot test. Jan 29 19:19:37.894: INFO: Node bootstrap-e2e-minion-group-kbdq failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. 
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:19:37.894 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 19:19:37.894 (7m8.128s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:19:37.894 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 19:19:37.895 Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-vf6r6 to bootstrap-e2e-minion-group-6j12 Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.433613505s (1.433635054s including waiting) Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": dial tcp 10.64.2.5:8181: connect: connection refused Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
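The per-pod verdicts above, logged just before the event collection began, come from the "running and ready, or succeeded" predicate: a succeeded pod passes outright, while a running pod must also carry the PodReady condition with status True, which is exactly what volume-snapshot-controller-0 lacked until 19:19:37. The essence of that check, sketched against the core/v1 types rather than copied from the framework:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// runningAndReadyOrSucceeded sketches the predicate: succeeded pods pass
// outright; running pods must also have PodReady=True. (The framework's
// own helper is more detailed.)
func runningAndReadyOrSucceeded(pod *v1.Pod) bool {
	switch pod.Status.Phase {
	case v1.PodSucceeded:
		return true
	case v1.PodRunning:
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady {
				return cond.Status == v1.ConditionTrue
			}
		}
	}
	return false
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{
		Phase:      v1.PodRunning,
		Conditions: []v1.PodCondition{{Type: v1.PodReady, Status: v1.ConditionFalse}},
	}}
	fmt.Println(runningAndReadyOrSucceeded(pod)) // false: Running but Ready=False
}
```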
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-vf6r6_kube-system(0cea2a5c-3519-4b06-a172-87a74da427cd) Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.13:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.13:8181/ready": dial tcp 10.64.2.13:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Liveness probe failed: Get "http://10.64.2.13:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-vf6r6 Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-vf6r6 Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xqdgk to bootstrap-e2e-minion-group-kbdq Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 990.09151ms (990.109933ms including waiting) Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container coredns Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "http://10.64.3.3:8181/ready": dial tcp 10.64.3.3:8181: connect: connection refused Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
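The "event for ..." lines throughout this AfterEach are produced by listing the Event objects in the kube-system namespace and printing each involved object, source, reason, and message. A minimal client-go equivalent, under the same kubeconfig assumption as the earlier sketch:

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	events, err := client.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Same shape as the log: event for <object>: {<component> <host>} <reason>: <message>
		fmt.Printf("event for %v: {%v %v} %v: %v\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}
```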
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "http://10.64.3.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "http://10.64.3.8:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container coredns failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-xqdgk
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "http://10.64.3.17:8181/ready": dial tcp 10.64.3.17:8181: connect: connection refused
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-vf6r6
Jan 29 19:19:37.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xqdgk
Jan 29 19:19:37.955: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 19:19:37.955: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 19:19:37.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 29 19:19:37.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 19:19:37.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3683c became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_67c56 became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_df769 became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_28596 became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8fdc7 became leader
Jan 29 19:19:37.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_eda6e became leader
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-2vqtg to bootstrap-e2e-minion-group-6j12
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 954.093152ms (954.103201ms including waiting)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-2vqtg_kube-system(9b972156-4678-407b-bae6-cbb0320f2268)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-2vqtg_kube-system(9b972156-4678-407b-bae6-cbb0320f2268)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Liveness probe failed: Get "http://10.64.2.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Failed: Error: failed to get sandbox container task: no running task found: task ce359821e83a420e36dfe37b2ccf490dd7b434c6387199aa880e2a31a15f9761 not found: not found
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-86td2 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 898.19014ms (898.205304ms including waiting)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-86td2_kube-system(69719ba2-5e8c-4fb5-851f-01aacdebb1fe)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-86td2_kube-system(69719ba2-5e8c-4fb5-851f-01aacdebb1fe)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Unhealthy: Liveness probe failed: Get "http://10.64.1.13:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Failed: Error: failed to get sandbox container task: no running task found: task e3bdcea50768017e3097570b0a7fd8f8b7d08ec4f9f0844f58f51996a1b259ed not found: not found
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-sl29q to bootstrap-e2e-minion-group-kbdq
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 634.905196ms (634.917128ms including waiting)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "http://10.64.3.9:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-sl29q_kube-system(85b21872-2276-4a8c-b663-a6787440ee59)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "http://10.64.3.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent
Jan 29 19:19:37.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-2vqtg
Jan 29 19:19:37.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-86td2
Jan 29 19:19:37.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-sl29q
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 19:19:37.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 19:19:37.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-apiserver in pod kube-apiserver-bootstrap-e2e-master_kube-system(bb9539f6145547e44e6540e67cf542b1)
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 19:19:37.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_950167c8-36b9-42df-8a85-3a9d28c53b4d became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8362f635-12b0-418d-8264-942880514a9e became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1c918cd0-bdd9-4406-82a9-d0c9fd5f6aa2 became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e8006dea-56a4-4ae5-8fe4-7691ecdbac01 became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c5609834-1631-4734-825c-ab0ef0ba6696 became leader
Jan 29 19:19:37.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_5cc95c90-cfe0-4ac9-b5ea-8c5338867cbd became leader
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sqslx to bootstrap-e2e-minion-group-6j12
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.376080383s (1.376088044s including waiting)
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-sqslx_kube-system(e0911a50-61bc-4e97-9427-cf2d00a53fcc)
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-sqslx
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-sqslx
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sqslx
Jan 29 19:19:37.955: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6j12_kube-system(4b09de720b01bf61ad28571efe2a195a)
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6j12_kube-system(4b09de720b01bf61ad28571efe2a195a)
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-kbdq_kube-system(61d71385284b43d8d86322a53815ff12)
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae)
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae)
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy
Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc11fe53-5cf0-4193-a2bb-e6c9362442ab became leader
Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_01571e77-c85b-4452-a422-92094f674352 became leader
Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3563e5ef-6b74-4b5f-aaae-be9535c8b370 became leader
Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b7b7f0d1-f60a-4a81-b65f-f63f2e050806 became leader
Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_7416f78d-b81a-4fde-9093-a1e9875aad37 became leader
Jan 29 19:19:37.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_29f719d9-3d11-4dd5-89e1-51aecacbbac6 became leader
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-ch8vf to bootstrap-e2e-minion-group-zmlw
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 524.239661ms (524.253716ms including waiting)
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container default-http-backend
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container default-http-backend
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Unhealthy: Liveness probe failed: Get "http://10.64.1.9:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend
Jan 29 19:19:37.955: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-ch8vf
Jan 29 19:19:37.955: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 19:19:37.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-69vb9 to bootstrap-e2e-minion-group-6j12
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 819.373409ms (819.391143ms including waiting)
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.793896744s (1.793906041s including waiting)
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {node-controller } NodeNotReady: Node is not ready
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bsd85 to bootstrap-e2e-master
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.670101ms (733.681792ms including waiting)
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.802128586s (1.802140747s including waiting)
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-k4wx2 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.06682ms (714.080021ms including waiting)
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container metadata-proxy
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.785588602s (1.785596591s including waiting)
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sxj7d to bootstrap-e2e-minion-group-kbdq Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.105616ms (714.11794ms including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:19:37.955: INFO: event for 
metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.882455818s (1.882464632s including waiting) Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
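Note: the repeated "SandboxChanged: Pod sandbox changed, it will be killed and re-created" events are the expected signature of a node restart: the pod's pause sandbox does not survive the reboot, so kubelet recreates it and restarts every container, producing the fresh Created/Started pairs above. The node dumps later in this log also show kubelet refreshing f:nodeInfo.f:bootID, the field a test can compare to confirm a machine actually rebooted. A minimal sketch of that comparison (the helper is hypothetical; the example BootIDs are taken from the node dumps below):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// rebooted is a hypothetical helper: a node that really restarted comes back
// with a new Status.NodeInfo.BootID, the field the kubelet updates in the
// managed-fields diffs shown in the node dumps later in this log.
func rebooted(before, after *corev1.Node) bool {
	return before.Status.NodeInfo.BootID != after.Status.NodeInfo.BootID
}

func main() {
	before, after := &corev1.Node{}, &corev1.Node{}
	before.Status.NodeInfo.BootID = "68c04943-fcd4-4db6-91f3-becf325d9eb5"
	after.Status.NodeInfo.BootID = "2fabb178-4b4e-4a6c-9089-f906d84a1938"
	fmt.Println(rebooted(before, after)) // true
}
```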
Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bsd85 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-k4wx2 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-69vb9 Jan 29 19:19:37.955: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sxj7d Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
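Note: the two FailedScheduling events above date from cluster bring-up: while only the master exists it is both cordoned (spec.unschedulable) and tainted node-role.kubernetes.io/master:NoSchedule, as its node dump later in this log confirms, so the scheduler reports "0/1 nodes are available", and preemption cannot help because the blocker is a taint rather than resource contention. A sketch of that schedulability check, deliberately ignoring tolerations (the helper name is illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// schedulable is an illustrative simplification of the scheduler's node
// filtering: a cordoned node or one carrying a NoSchedule taint is skipped.
// Real scheduling also matches pod tolerations, resources, affinity, etc.
func schedulable(n *corev1.Node) bool {
	if n.Spec.Unschedulable {
		return false
	}
	for _, t := range n.Spec.Taints {
		if t.Effect == corev1.TaintEffectNoSchedule {
			return false
		}
	}
	return true
}

func main() {
	master := &corev1.Node{}
	master.Spec.Unschedulable = true
	master.Spec.Taints = []corev1.Taint{
		{Key: "node-role.kubernetes.io/master", Effect: corev1.TaintEffectNoSchedule},
	}
	fmt.Println(schedulable(master)) // false
}
```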
Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-57s7b to bootstrap-e2e-minion-group-6j12 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.867162036s (1.867179734s including waiting) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.143065018s (1.143075491s including waiting) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-57s7b Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-57s7b Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rbv42 to bootstrap-e2e-minion-group-kbdq Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.329430274s (1.329453807s including waiting) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet 
bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 999.838364ms (999.850042ms including waiting) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": dial tcp 10.64.3.4:10250: connect: connection refused Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": dial tcp 10.64.3.4:10250: connect: connection refused Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Failed: Error: failed to get sandbox container task: no running task found: task 9b8fcc9e9e402a3c97e0f4aec77203618c2c01ccfd4d4d09a7ae88ba7b697e9a not found: not found Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
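Note: the Unhealthy events above show kubelet probing https://<podIP>:10250/livez and /readyz and failing first with "connection refused" (the process is down across the reboot) and then with client timeouts, after which the liveness failure triggers the restart logged as "Container metrics-server failed liveness probe, will be restarted". A minimal sketch of a probe consistent with those URLs; the period, timeout, and threshold values are assumptions, not the addon's actual manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessProbe sketches a probe matching the events above: kubelet GETs
// https://<podIP>:10250/livez and, after failureThreshold consecutive
// failures, kills and restarts the container.
func livenessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/livez",
				Port:   intstr.FromInt(10250),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    10, // assumed
		TimeoutSeconds:   1,  // API default; consistent with the Client.Timeout errors above
		FailureThreshold: 3,  // assumed
	}
}

func main() { fmt.Printf("%+v\n", livenessProbe()) }
```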
Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.10:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.10:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.10:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.10:10250/readyz": context deadline exceeded Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-rbv42_kube-system(692fae41-4cdd-4a87-8903-78ba3c7a5848) Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-rbv42 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
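Note: the BackOff event ("Back-off restarting failed container metrics-server ...") is kubelet's crash-loop damping: each consecutive crash roughly doubles the restart delay from about 10s up to a 5m cap, resetting after the container runs cleanly for a while. Those are the commonly documented defaults; the exact constants are kubelet internals and may vary by version. A sketch of the resulting schedule:

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of kubelet's crash-loop restart back-off: roughly 10s initial
// delay, doubled per consecutive crash, capped at 5m (documented defaults).
func main() {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```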
Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rbv42 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 19:19:37.955: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-zmlw Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.399668429s (1.399675942s including waiting) Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(998e9588-4f8a-4c36-bffc-169b133e589e) Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller Jan 29 19:19:37.955: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(998e9588-4f8a-4c36-bffc-169b133e589e) Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
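Note: in the teardown that follows, the framework logs "Waiting up to 3m0s for all (but 0) nodes to be ready". A minimal client-go sketch of that kind of poll, assuming an already-configured clientset (this illustrates the idea and is not the framework's actual helper):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodesReady polls the node list until every node reports the Ready
// condition as True, or the timeout elapses.
func waitForNodesReady(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors while polling
			}
			for _, n := range nodes.Items {
				if !isReady(&n) {
					fmt.Printf("node %s not ready yet\n", n.Name)
					return false, nil
				}
			}
			return true, nil
		})
}

func isReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```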
Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller Jan 29 19:19:37.956: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:19:37.956 (61ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:19:37.956 Jan 29 19:19:37.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:19:38.001 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:19:38.001 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:19:38.001 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:19:38.001 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:19:38.001 STEP: Collecting events from namespace "reboot-6953". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 19:19:38.001 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 19:19:38.042 Jan 29 19:19:38.083: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 19:19:38.083: INFO: Jan 29 19:19:38.128: INFO: Logging node info for node bootstrap-e2e-master Jan 29 19:19:38.170: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6d594531-bf60-4169-a952-1435da6f1f19 2476 0 2023-01-29 18:58:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 19:18:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:18:44 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:18:44 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:18:44 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:18:44 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.160.185,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:715ad78430040f7d6ba514abe5aaad49,SystemUUID:715ad784-3004-0f7d-6ba5-14abe5aaad49,BootID:68c04943-fcd4-4db6-91f3-becf325d9eb5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:19:38.171: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 19:19:38.217: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 19:19:38.293: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container kube-controller-manager ready: true, restart count 7 Jan 29 19:19:38.293: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container etcd-container ready: true, restart count 1 Jan 29 19:19:38.293: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container etcd-container ready: true, restart count 2 Jan 29 19:19:38.293: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container kube-apiserver ready: true, restart count 3 Jan 29 19:19:38.293: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 18:57:34 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 29 19:19:38.293: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 18:57:34 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 29 19:19:38.293: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container kube-scheduler ready: false, restart count 5 Jan 29 19:19:38.293: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 18:57:17 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.293: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 29 19:19:38.293: INFO: metadata-proxy-v0.1-bsd85 started at 2023-01-29 18:58:01 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:38.293: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 19:19:38.293: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 19:19:38.471: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 19:19:38.471: INFO: Logging node info for node bootstrap-e2e-minion-group-6j12 Jan 29 19:19:38.513: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6j12 ab88abcc-a824-4e7b-91d9-e5b55ca7b07b 2580 0 2023-01-29 18:58:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6j12 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 19:13:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 19:15:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 19:19:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 19:19:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-6j12,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.40.177,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:533e501db67cda40a67ec8f66182930e,SystemUUID:533e501d-b67c-da40-a67e-c8f66182930e,BootID:2fabb178-4b4e-4a6c-9089-f906d84a1938,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:19:38.513: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6j12 Jan 29 19:19:38.559: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6j12 Jan 29 19:19:38.623: INFO: kube-dns-autoscaler-5f6455f985-sqslx started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.623: INFO: Container autoscaler ready: true, restart count 3 Jan 29 19:19:38.623: INFO: coredns-6846b5b5f-vf6r6 started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.623: INFO: Container coredns ready: true, restart count 7 Jan 29 19:19:38.623: INFO: metadata-proxy-v0.1-69vb9 started at 2023-01-29 18:58:06 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:38.623: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 19:19:38.623: INFO: Container prometheus-to-sd-exporter ready: true, 
restart count 2 Jan 29 19:19:38.623: INFO: konnectivity-agent-2vqtg started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.623: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 19:19:38.623: INFO: kube-proxy-bootstrap-e2e-minion-group-6j12 started at 2023-01-29 18:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.623: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 19:19:38.803: INFO: Latency metrics for node bootstrap-e2e-minion-group-6j12 Jan 29 19:19:38.803: INFO: Logging node info for node bootstrap-e2e-minion-group-kbdq Jan 29 19:19:38.846: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kbdq c88d547b-ac1b-48a3-9f38-f761a4792a9d 2537 0 2023-01-29 18:58:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kbdq kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 19:13:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 19:15:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 19:19:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 19:19:23 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-kbdq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:15:25 +0000 UTC,LastTransitionTime:2023-01-29 19:15:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:19:23 +0000 UTC,LastTransitionTime:2023-01-29 19:19:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.183.142,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f85a1ba151054485449fa0d667f3e53e,SystemUUID:f85a1ba1-5105-4485-449f-a0d667f3e53e,BootID:a74596f7-0e7e-4274-8a05-ac891407debe,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:19:38.846: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kbdq Jan 29 19:19:38.892: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kbdq Jan 
29 19:19:38.959: INFO: metadata-proxy-v0.1-sxj7d started at 2023-01-29 18:58:07 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:38.959: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 19:19:38.959: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 19:19:38.959: INFO: konnectivity-agent-sl29q started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.959: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 19:19:38.959: INFO: coredns-6846b5b5f-xqdgk started at 2023-01-29 18:58:22 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.959: INFO: Container coredns ready: true, restart count 4 Jan 29 19:19:38.959: INFO: metrics-server-v0.5.2-867b8754b9-rbv42 started at 2023-01-29 18:58:31 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:38.959: INFO: Container metrics-server ready: false, restart count 7 Jan 29 19:19:38.959: INFO: Container metrics-server-nanny ready: false, restart count 6 Jan 29 19:19:38.959: INFO: kube-proxy-bootstrap-e2e-minion-group-kbdq started at 2023-01-29 18:58:06 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:38.959: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 19:19:39.128: INFO: Latency metrics for node bootstrap-e2e-minion-group-kbdq Jan 29 19:19:39.128: INFO: Logging node info for node bootstrap-e2e-minion-group-zmlw Jan 29 19:19:39.180: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zmlw e228bd00-93a0-454f-b62d-2a81447198ac 2607 0 2023-01-29 18:58:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zmlw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 19:13:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 19:14:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 19:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 19:19:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-zmlw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 
UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:14:53 +0000 UTC,LastTransitionTime:2023-01-29 19:14:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:33 +0000 UTC,LastTransitionTime:2023-01-29 19:19:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:33 +0000 UTC,LastTransitionTime:2023-01-29 19:19:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:19:33 +0000 UTC,LastTransitionTime:2023-01-29 19:19:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:19:33 +0000 UTC,LastTransitionTime:2023-01-29 19:19:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.185.251.137,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:edebb6d4adaefd8f58c1a37613cc5a13,SystemUUID:edebb6d4-adae-fd8f-58c1-a37613cc5a13,BootID:1e777854-9a8e-44b3-9035-d54a4da76007,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:19:39.181: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-zmlw Jan 29 19:19:39.233: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zmlw Jan 29 19:19:39.301: INFO: kube-proxy-bootstrap-e2e-minion-group-zmlw started at 2023-01-29 18:58:02 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:39.301: INFO: Container kube-proxy ready: true, restart count 8 Jan 29 19:19:39.301: INFO: l7-default-backend-8549d69d99-ch8vf started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:39.301: INFO: Container default-http-backend ready: false, restart count 3 Jan 29 19:19:39.301: INFO: volume-snapshot-controller-0 started at 2023-01-29 18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:39.301: INFO: Container volume-snapshot-controller ready: true, restart count 9 Jan 29 19:19:39.301: INFO: metadata-proxy-v0.1-k4wx2 started at 2023-01-29 18:58:03 +0000 UTC (0+2 container statuses recorded) Jan 29 19:19:39.301: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 19:19:39.301: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 19:19:39.301: INFO: konnectivity-agent-86td2 started at 2023-01-29 
18:58:18 +0000 UTC (0+1 container statuses recorded) Jan 29 19:19:39.301: INFO: Container konnectivity-agent ready: false, restart count 7 Jan 29 19:20:14.794: INFO: Latency metrics for node bootstrap-e2e-minion-group-zmlw END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:20:14.794 (36.793s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:20:14.794 (36.794s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:20:14.794 STEP: Destroying namespace "reboot-6953" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 19:20:14.794 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:20:14.839 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:20:14.839 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:20:14.839 (0s)
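For local triage, the Ready condition reported in the node dumps above can be read directly against the same cluster with client-go. A minimal sketch, assuming the kubeconfig path and node name shown in this log (this is not part of the e2e suite):

// nodeready.go: print one node's Ready condition as a triage aid.
// Kubeconfig path and node name are taken from the log above.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "bootstrap-e2e-minion-group-zmlw", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("Ready=%v reason=%s heartbeat=%s transition=%s\n",
				c.Status, c.Reason, c.LastHeartbeatTime, c.LastTransitionTime)
		}
	}
}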
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:04:06.148 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 19:01:57.202 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 19:01:57.202 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 19:01:57.202 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 19:01:57.202 Jan 29 19:01:57.202: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 19:01:57.204 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 19:01:57.332 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 19:01:57.413 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 19:01:57.494 (292ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 19:01:57.494 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 19:01:57.494 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 19:01:57.494 Jan 29 19:01:57.588: INFO: Getting bootstrap-e2e-minion-group-kbdq Jan 29 19:01:57.588: INFO: Getting bootstrap-e2e-minion-group-zmlw Jan 29 19:01:57.588: INFO: Getting bootstrap-e2e-minion-group-6j12 Jan 29 19:01:57.661: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-zmlw condition Ready to be true Jan 29 19:01:57.661: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6j12 condition Ready to be true Jan 29 19:01:57.662: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kbdq condition Ready to be true Jan 29 19:01:57.704: INFO: Node bootstrap-e2e-minion-group-6j12 has 3 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:01:57.704: INFO: Node bootstrap-e2e-minion-group-zmlw has 3 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 19:01:57.704: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:01:57.704: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-69vb9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-k4wx2" in namespace "kube-system" to be "running and ready, or 
succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6j12" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sqslx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Node bootstrap-e2e-minion-group-kbdq has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sxj7d" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.750: INFO: Pod "metadata-proxy-v0.1-69vb9": Phase="Running", Reason="", readiness=true. Elapsed: 45.396657ms Jan 29 19:01:57.750: INFO: Pod "metadata-proxy-v0.1-69vb9" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx": Phase="Running", Reason="", readiness=true. Elapsed: 47.381942ms Jan 29 19:01:57.752: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 47.68288ms Jan 29 19:01:57.752: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=true. Elapsed: 47.67107ms Jan 29 19:01:57.752: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12": Phase="Running", Reason="", readiness=true. Elapsed: 47.65663ms Jan 29 19:01:57.752: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=true. Elapsed: 47.690392ms Jan 29 19:01:57.752: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "metadata-proxy-v0.1-k4wx2" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:01:57.752: INFO: Getting external IP address for bootstrap-e2e-minion-group-6j12 Jan 29 19:01:57.752: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6j12(34.82.40.177:22) Jan 29 19:01:57.752: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 19:01:57.752: INFO: Getting external IP address for bootstrap-e2e-minion-group-zmlw Jan 29 19:01:57.752: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-zmlw(35.185.251.137:22) Jan 29 19:01:57.754: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq": Phase="Running", Reason="", readiness=true. Elapsed: 48.279624ms Jan 29 19:01:57.754: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.754: INFO: Pod "metadata-proxy-v0.1-sxj7d": Phase="Running", Reason="", readiness=true. Elapsed: 48.437021ms Jan 29 19:01:57.754: INFO: Pod "metadata-proxy-v0.1-sxj7d" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.754: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:01:57.754: INFO: Getting external IP address for bootstrap-e2e-minion-group-kbdq Jan 29 19:01:57.754: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-kbdq(34.168.183.142:22) Jan 29 19:02:05.464: INFO: ssh prow@34.168.183.142:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 19:02:05.464: INFO: ssh prow@34.168.183.142:22: stdout: "" Jan 29 19:02:05.464: INFO: ssh prow@34.168.183.142:22: stderr: "" Jan 29 19:02:05.464: INFO: ssh prow@34.168.183.142:22: exit code: 0 Jan 29 19:02:05.464: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kbdq condition Ready to be false Jan 29 19:02:05.471: INFO: ssh prow@35.185.251.137:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 19:02:05.471: INFO: ssh prow@35.185.251.137:22: stdout: "" Jan 29 19:02:05.471: INFO: ssh prow@35.185.251.137:22: stderr: "" Jan 29 19:02:05.471: INFO: ssh prow@35.185.251.137:22: exit code: 0 Jan 29 19:02:05.471: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-zmlw condition Ready to be false Jan 29 19:02:05.472: INFO: ssh prow@34.82.40.177:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 19:02:05.472: INFO: ssh prow@34.82.40.177:22: stdout: "" Jan 29 19:02:05.472: INFO: ssh prow@34.82.40.177:22: stderr: "" Jan 29 19:02:05.472: INFO: ssh prow@34.82.40.177:22: exit code: 0 Jan 29 19:02:05.472: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6j12 condition Ready to be false Jan 29 19:02:05.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:05.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:05.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:02:07.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:07.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:07.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:09.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:09.603: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:09.603: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:11.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:11.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:11.647: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:13.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:13.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:13.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:15.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:15.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:15.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:17.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:17.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:17.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:19.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:19.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:19.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:21.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:21.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:21.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:23.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:23.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:23.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:25.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:25.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:25.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:27.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:28.010: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:28.010: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:30.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:30.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:30.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:32.081: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:02:32.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:32.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:34.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:34.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:34.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:36.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:36.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:36.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:38.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:38.233: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:38.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:40.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:40.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:40.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:42.297: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:42.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:42.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:44.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:44.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:44.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:46.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:46.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:46.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:48.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:48.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:48.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:50.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:50.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:50.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:52.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:52.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:52.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:54.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:54.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:54.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:56.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:56.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:02:56.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:58.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:58.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:58.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:00.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:00.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:00.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:02.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:02.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:02.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:04.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:04.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:04.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:06.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:06.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:06.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:08.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:08.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:08.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:10.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:10.975: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:10.975: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:12.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:13.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:13.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:14.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:15.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:15.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:17.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:17.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:17.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:19.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:19.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:19.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:21.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:21.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:21.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:03:23.167: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:23.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:23.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:25.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:25.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:25.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:27.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:27.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:27.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:29.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:29.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:29.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:31.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:31.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:31.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:33.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:33.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:33.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:35.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:35.519: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:35.519: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:37.491: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:37.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:37.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:39.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:39.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:39.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:41.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:41.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:41.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:43.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:43.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:43.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:45.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:45.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:45.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:47.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:03:47.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:47.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:49.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:49.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:49.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:51.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:51.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:51.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:53.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:53.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:53.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:55.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:55.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:55.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:57.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:58.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:58.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:59.975: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:00.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:00.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:02.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:02.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:02.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:04.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:04.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:04.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:06.066: INFO: Node bootstrap-e2e-minion-group-kbdq didn't reach desired Ready condition status (false) within 2m0s Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-zmlw didn't reach desired Ready condition status (false) within 2m0s Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-6j12 didn't reach desired Ready condition status (false) within 2m0s Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-6j12 failed reboot test. Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-kbdq failed reboot test. Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-zmlw failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:04:06.148 < Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 19:04:06.148 (2m8.654s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:04:06.148 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 19:04:06.148 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
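For context on the timeout above: the suite reboots each node over SSH with the magic-SysRq sequence shown earlier in this log (echo 1 | sudo tee /proc/sys/kernel/sysrq enables SysRq; echo b | sudo tee /proc/sysrq-trigger forces an immediate reboot without syncing), then waits up to 2m0s for each node's Ready condition to turn false — which never happens here. A simplified, stand-alone version of that wait, in the spirit of the framework helper but not the suite's actual code (the 2s poll interval is an assumption):

// waitnotready.go: poll until a node's Ready condition stops being True,
// mirroring the 2m0s wait that times out in the log above. A sketch only.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNotReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				// Treat Unknown as not-ready too; only True keeps us waiting.
				return c.Status != v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNotReady(cs, "bootstrap-e2e-minion-group-kbdq", 2*time.Minute); err != nil {
		fmt.Println("node never left Ready within 2m0s:", err)
	}
}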
Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-vf6r6 to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.433613505s (1.433635054s including waiting) Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": dial tcp 10.64.2.5:8181: connect: connection refused Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xqdgk to bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 990.09151ms (990.109933ms including waiting) Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container coredns Jan 29 19:04:06.198: INFO: event for 
coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "http://10.64.3.3:8181/ready": dial tcp 10.64.3.3:8181: connect: connection refused Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-vf6r6 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xqdgk Jan 29 19:04:06.198: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 19:04:06.198: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 19:04:06.198: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 19:04:06.198: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 19:04:06.198: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3683c became leader Jan 29 19:04:06.198: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_67c56 became leader Jan 29 19:04:06.198: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_df769 became leader Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-2vqtg to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 954.093152ms (954.103201ms including waiting) Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container konnectivity-agent Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
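Triage note: three distinct "became leader" identities for ingress-gce-lock (and likewise for kube-controller-manager and kube-scheduler further down) mean the component re-acquired its lock three times, i.e. it restarted repeatedly; each process joins the election with a fresh random suffix, so every restart surfaces as a new leader. A minimal client-go sketch of the same lease-based pattern; the lock name, namespace, and timings are illustrative.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Name: "demo-lock", Namespace: "kube-system"},
		Client:    client.CoordinationV1(),
		// A new identity per process start: restarts show up as new leaders.
		LockConfig: resourcelock.ResourceLockConfig{Identity: "demo_" + os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* do leader work */ },
			OnStoppedLeading: func() { os.Exit(1) }, // lost the lease; restart to rejoin
		},
	})
}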
Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-2vqtg_kube-system(9b972156-4678-407b-bae6-cbb0320f2268) Jan 29 19:04:06.198: INFO: event for konnectivity-agent-86td2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-86td2 to bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.198: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 898.19014ms (898.205304ms including waiting) Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-86td2_kube-system(69719ba2-5e8c-4fb5-851f-01aacdebb1fe) Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-sl29q to bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 634.905196ms (634.917128ms including waiting) Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
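Triage note: the "Back-off restarting failed container" events above carry no counter, but the kubelet's restart delay is exponential: roughly 10s after the first crash, doubling each time, capped at 5m, and reset once the container runs cleanly for a while. The schedule in a few lines (values are the kubelet defaults for this era, assumed rather than read from the node):

package main

import (
	"fmt"
	"time"
)

// restartDelays lists the kubelet's crash-loop delays: start at 10s,
// double per crash, cap at 5m.
func restartDelays(crashes int) []time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, crashes)
	d := initial
	for i := 0; i < crashes; i++ {
		delays = append(delays, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	fmt.Println(restartDelays(7)) // [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}

A pod that has crash-looped for a few minutes is therefore already waiting the full 5m between attempts, which is worth keeping in mind when reading the "not ready" waits elsewhere in this log.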
Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 19:04:06.199: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-2vqtg Jan 29 19:04:06.199: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-86td2 Jan 29 19:04:06.199: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-sl29q Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 19:04:06.199: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
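Triage note: the konnectivity-server/agent pods above are load-bearing for this log: on this cluster the API server reaches kubelets through the konnectivity tunnel, so with all three proxy-agents crash-looping, node-proxied requests fail; that is why the node dump at the end reports "Unable to retrieve kubelet pods ... error trying to reach service: No agent available". The framework's per-node pod listing is essentially a nodes/<name>/proxy/pods call, sketched here with client-go (node name hard-coded for illustration):

package main

import (
	"context"
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes/<name>/proxy/pods: the API server forwards this
	// to the kubelet over the konnectivity tunnel when one is configured.
	raw, err := client.CoreV1().RESTClient().Get().
		Resource("nodes").Name("bootstrap-e2e-minion-group-6j12").
		SubResource("proxy").Suffix("pods").
		DoRaw(context.Background())
	if err != nil {
		fmt.Println("proxy to kubelet failed:", err) // e.g. "No agent available"
		return
	}
	fmt.Printf("got %d bytes of pod list\n", len(raw))
}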
Jan 29 19:04:06.199: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_950167c8-36b9-42df-8a85-3a9d28c53b4d became leader Jan 29 19:04:06.199: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8362f635-12b0-418d-8264-942880514a9e became leader Jan 29 19:04:06.199: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1c918cd0-bdd9-4406-82a9-d0c9fd5f6aa2 became leader Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sqslx to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.376080383s (1.376088044s including waiting) Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sqslx Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
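Triage note: the pair of FailedScheduling events above is a timeline of bring-up, not a real scheduling problem: "no nodes available to schedule pods" fired before any node registered, and "0/1 nodes are available: 1 node(s) were unschedulable" fired while only the master existed, which is cordoned (the node dump below shows Unschedulable:true plus the node-role.kubernetes.io/master and node.kubernetes.io/unschedulable taints). The cordon check is essentially the sketch below; the real filter also consults pod tolerations, omitted here.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Simplified NodeUnschedulable filter: a cordoned node is rejected
// (real scheduling also lets pods tolerate the unschedulable taint).
func schedulable(node *corev1.Node) bool {
	return !node.Spec.Unschedulable
}

func main() {
	master := &corev1.Node{}
	master.Spec.Unschedulable = true // cordoned, like bootstrap-e2e-master
	fmt.Println(schedulable(master)) // false -> "1 node(s) were unschedulable"
}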
Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6j12_kube-system(4b09de720b01bf61ad28571efe2a195a) Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
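Triage note: DNSConfigForming is a warning, not a failure. The nodes' resolv.conf lists more nameservers than the kubelet will propagate into a pod, so it keeps the first entries up to its limit, three here, and drops the rest; the applied line "1.1.1.1 8.8.8.8 1.0.0.1" in the events is that truncation. A standard-library sketch of the behavior, assuming the usual limit of three:

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the kubelet's limit of three nameservers per
// pod resolv.conf (assumed here; check your kubelet version).
const maxNameservers = 3

func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extras are silently dropped
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 8.8.8.8\nnameserver 1.0.0.1\nnameserver 9.9.9.9"
	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 8.8.8.8 1.0.0.1]
}

It repeats for every pod on the affected nodes, so treat it as noise here unless DNS resolution itself is implicated.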
Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae) Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 19:04:06.199: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc11fe53-5cf0-4193-a2bb-e6c9362442ab became leader Jan 29 19:04:06.199: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_01571e77-c85b-4452-a422-92094f674352 became leader Jan 29 19:04:06.199: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3563e5ef-6b74-4b5f-aaae-be9535c8b370 became leader Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
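Triage note: the parenthesized suffix in the BackOff events tells static pods apart from API-managed ones: kubelet static pods (kube-proxy, etcd, the control-plane pods) show a 32-character config hash, e.g. kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae) above, while DaemonSet pods such as konnectivity-agent earlier show a normal UID. The API-side mirror of a static pod is recognizable by an annotation; the key name follows current kubelet sources and should be treated as an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Mirror pods (the API view of kubelet static pods) carry the
// kubernetes.io/config.mirror annotation holding the config hash.
func isMirrorPod(p *corev1.Pod) bool {
	_, ok := p.Annotations["kubernetes.io/config.mirror"]
	return ok
}

func main() {
	p := &corev1.Pod{}
	p.Annotations = map[string]string{
		"kubernetes.io/config.mirror": "f79ee35ecf1fb040fbeb5b8a84a1dcae",
	}
	fmt.Println(isMirrorPod(p)) // true
}

Deleting a mirror pod through the API changes nothing lasting; the kubelet recreates it from the on-disk manifest, so static-pod crash loops have to be debugged on the node itself.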
Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-ch8vf to bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 524.239661ms (524.253716ms including waiting) Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container default-http-backend Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-ch8vf Jan 29 19:04:06.199: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 19:04:06.199: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 19:04:06.199: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 19:04:06.199: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 19:04:06.199: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-69vb9 to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 819.373409ms (819.391143ms including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.793896744s (1.793906041s including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created 
container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bsd85 to bootstrap-e2e-master Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.670101ms (733.681792ms including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.802128586s (1.802140747s including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-k4wx2 to bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.06682ms (714.080021ms including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: 
Started container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.785588602s (1.785596591s including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sxj7d to bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.105616ms (714.11794ms including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.882455818s (1.882464632s including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bsd85 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-k4wx2 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-69vb9 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sxj7d Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: no nodes 
available to schedule pods Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-57s7b to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.867162036s (1.867179734s including waiting) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.143065018s (1.143075491s including waiting) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-57s7b Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-57s7b Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rbv42 to bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.329430274s 
(1.329453807s including waiting) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 999.838364ms (999.850042ms including waiting) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": dial tcp 10.64.3.4:10250: connect: connection refused Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": dial tcp 10.64.3.4:10250: connect: connection refused Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Failed: Error: failed to get sandbox container task: no running task found: task 9b8fcc9e9e402a3c97e0f4aec77203618c2c01ccfd4d4d09a7ae88ba7b697e9a not found: not found Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
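Triage note: two probe failure shapes appear above and they mean different things: "connect: connection refused" says nothing is listening (the container just exited or is restarting), while "Client.Timeout exceeded while awaiting headers" says the process is up but wedged, or the path to it is broken. Only after FailureThreshold consecutive liveness failures does the kubelet log "Container ... failed liveness probe, will be restarted" and recycle the container. A probe shaped like this one, with the Kubernetes default thresholds assumed rather than read from the metrics-server manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// HTTPS GET /livez on 10250, as in the failing probe above.
	p := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/livez",
				Port:   intstr.FromInt(10250),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    10, // defaults: probe every 10s...
		TimeoutSeconds:   1,  // ...give it 1s to answer...
		FailureThreshold: 3,  // ...restart after 3 straight failures
	}
	fmt.Printf("restart after ~%ds of consecutive failures\n",
		p.PeriodSeconds*p.FailureThreshold)
}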
Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rbv42 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.399668429s (1.399675942s including waiting) Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
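Triage note: the three ScalingReplicaSet events above are one clean rolling update (a new metrics-server pod template, most likely the addon-resizer adjusting resources), not reboot churn. With 1 replica and the default 25%/25% rolling-update strategy, maxUnavailable rounds down to 0 and maxSurge rounds up to 1, which forces exactly the logged order: scale the new ReplicaSet 0 -> 1, then the old one 1 -> 0. The rounding:

package main

import (
	"fmt"
	"math"
)

// Default Deployment rolling-update bounds for a 25%/25% strategy:
// maxUnavailable rounds down, maxSurge rounds up.
func rollingBounds(replicas int) (maxUnavailable, maxSurge int) {
	maxUnavailable = int(math.Floor(0.25 * float64(replicas)))
	maxSurge = int(math.Ceil(0.25 * float64(replicas)))
	return
}

func main() {
	mu, ms := rollingBounds(1)
	fmt.Printf("maxUnavailable=%d maxSurge=%d\n", mu, ms) // 0 and 1
}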
Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(998e9588-4f8a-4c36-bffc-169b133e589e) Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:04:06.199 (51ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:04:06.199 Jan 29 19:04:06.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:04:06.242 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:04:06.242 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:04:06.242 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:04:06.242 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:04:06.242 STEP: Collecting events from namespace "reboot-2882". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 19:04:06.242 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 19:04:06.283 Jan 29 19:04:06.324: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 19:04:06.324: INFO: Jan 29 19:04:06.371: INFO: Logging node info for node bootstrap-e2e-master Jan 29 19:04:06.414: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6d594531-bf60-4169-a952-1435da6f1f19 1159 0 2023-01-29 18:58:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 19:03:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:03:28 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:03:28 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:03:28 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:03:28 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.160.185,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:715ad78430040f7d6ba514abe5aaad49,SystemUUID:715ad784-3004-0f7d-6ba5-14abe5aaad49,BootID:68c04943-fcd4-4db6-91f3-becf325d9eb5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:04:06.415: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 19:04:06.458: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 19:04:06.502: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Jan 29 19:04:06.502: INFO: Logging node info for node bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.544: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6j12 ab88abcc-a824-4e7b-91d9-e5b55ca7b07b 1152 0 2023-01-29 18:58:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6j12 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:58:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2023-01-29 19:03:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-6j12,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 
UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.40.177,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:533e501db67cda40a67ec8f66182930e,SystemUUID:533e501d-b67c-da40-a67e-c8f66182930e,BootID:7cadddf6-de30-4659-92da-b0bad3394bd4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:04:06.544: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.587: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.630: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6j12: error trying to reach service: No agent available Jan 29 19:04:06.630: INFO: Logging node info for node bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.672: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kbdq c88d547b-ac1b-48a3-9f38-f761a4792a9d 1157 0 2023-01-29 18:58:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kbdq kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:59:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2023-01-29 19:03:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-kbdq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 
UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.183.142,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f85a1ba151054485449fa0d667f3e53e,SystemUUID:f85a1ba1-5105-4485-449f-a0d667f3e53e,BootID:7f2871d2-e9bf-4efa-98a3-73903aa33d68,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:04:06.672: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.715: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.759: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-kbdq: error trying to reach service: No agent available Jan 29 19:04:06.759: INFO: Logging node info for node bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.801: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zmlw e228bd00-93a0-454f-b62d-2a81447198ac 1150 0 2023-01-29 18:58:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zmlw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:58:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2023-01-29 19:03:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-zmlw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 
19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.185.251.137,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:edebb6d4adaefd8f58c1a37613cc5a13,SystemUUID:edebb6d4-adae-fd8f-58c1-a37613cc5a13,BootID:edd86694-6db8-41e1-b532-ff776863141f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:04:06.801: INFO: Logging kubelet events for 
node bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.844: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.887: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-zmlw: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:04:06.887 (646ms) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:04:06.887 (646ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:04:06.887 STEP: Destroying namespace "reboot-2882" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 19:04:06.887 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:04:06.93 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:04:06.931 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:04:06.931 (0s)
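Aside: the repeated "Waiting up to 20s for node ... condition Ready to be true" and "Condition Ready of node ... is true instead of false" entries in these logs come from the framework polling each Node object's Ready condition through the API server. A minimal client-go sketch of an equivalent check (a hypothetical standalone helper, not the e2e framework's own code; the kubeconfig path and node name are taken from this log):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True,
// mirroring the condition checks logged above.
func isNodeReady(node *v1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as reported at the top of the log.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-6j12", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %s Ready=%v\n", node.Name, isNodeReady(node))
}

The reboot test counts a reboot as detected only once this condition flips from True to False within the allotted window, which is the transition the polling above is waiting for.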
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:04:06.148 (from junit_01.xml)
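For reference, Ginkgo interprets the --ginkgo.focus value in the repro command above as a Go regular expression over the full test name; the backslash escapes protect the brackets and other regex metacharacters. A small sketch confirming the pattern matches the plain test name, using the standard library regexp package:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied from the repro command above, minus the shell quoting.
	focus := `Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$`
	// The unescaped test name it is meant to select.
	name := "Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart"
	fmt.Println(regexp.MustCompile(focus).MatchString(name)) // prints: true
}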
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 19:01:57.202 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 19:01:57.202 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 19:01:57.202 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 19:01:57.202 Jan 29 19:01:57.202: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 19:01:57.204 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 19:01:57.332 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 19:01:57.413 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 19:01:57.494 (292ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 19:01:57.494 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 19:01:57.494 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 19:01:57.494 Jan 29 19:01:57.588: INFO: Getting bootstrap-e2e-minion-group-kbdq Jan 29 19:01:57.588: INFO: Getting bootstrap-e2e-minion-group-zmlw Jan 29 19:01:57.588: INFO: Getting bootstrap-e2e-minion-group-6j12 Jan 29 19:01:57.661: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-zmlw condition Ready to be true Jan 29 19:01:57.661: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6j12 condition Ready to be true Jan 29 19:01:57.662: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kbdq condition Ready to be true Jan 29 19:01:57.704: INFO: Node bootstrap-e2e-minion-group-6j12 has 3 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:01:57.704: INFO: Node bootstrap-e2e-minion-group-zmlw has 3 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 19:01:57.704: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:01:57.704: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-69vb9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-k4wx2" in namespace "kube-system" to be "running and ready, or 
succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6j12" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sqslx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Node bootstrap-e2e-minion-group-kbdq has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sxj7d" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.705: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 19:01:57.750: INFO: Pod "metadata-proxy-v0.1-69vb9": Phase="Running", Reason="", readiness=true. Elapsed: 45.396657ms Jan 29 19:01:57.750: INFO: Pod "metadata-proxy-v0.1-69vb9" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx": Phase="Running", Reason="", readiness=true. Elapsed: 47.381942ms Jan 29 19:01:57.752: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 47.68288ms Jan 29 19:01:57.752: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=true. Elapsed: 47.67107ms Jan 29 19:01:57.752: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12": Phase="Running", Reason="", readiness=true. Elapsed: 47.65663ms Jan 29 19:01:57.752: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=true. Elapsed: 47.690392ms Jan 29 19:01:57.752: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Pod "metadata-proxy-v0.1-k4wx2" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.752: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12 metadata-proxy-v0.1-69vb9] Jan 29 19:01:57.752: INFO: Getting external IP address for bootstrap-e2e-minion-group-6j12 Jan 29 19:01:57.752: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6j12(34.82.40.177:22) Jan 29 19:01:57.752: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 19:01:57.752: INFO: Getting external IP address for bootstrap-e2e-minion-group-zmlw Jan 29 19:01:57.752: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-zmlw(35.185.251.137:22) Jan 29 19:01:57.754: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq": Phase="Running", Reason="", readiness=true. Elapsed: 48.279624ms Jan 29 19:01:57.754: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.754: INFO: Pod "metadata-proxy-v0.1-sxj7d": Phase="Running", Reason="", readiness=true. Elapsed: 48.437021ms Jan 29 19:01:57.754: INFO: Pod "metadata-proxy-v0.1-sxj7d" satisfied condition "running and ready, or succeeded" Jan 29 19:01:57.754: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 19:01:57.754: INFO: Getting external IP address for bootstrap-e2e-minion-group-kbdq Jan 29 19:01:57.754: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-kbdq(34.168.183.142:22) Jan 29 19:02:05.464: INFO: ssh prow@34.168.183.142:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 19:02:05.464: INFO: ssh prow@34.168.183.142:22: stdout: "" Jan 29 19:02:05.464: INFO: ssh prow@34.168.183.142:22: stderr: "" Jan 29 19:02:05.464: INFO: ssh prow@34.168.183.142:22: exit code: 0 Jan 29 19:02:05.464: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kbdq condition Ready to be false Jan 29 19:02:05.471: INFO: ssh prow@35.185.251.137:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 19:02:05.471: INFO: ssh prow@35.185.251.137:22: stdout: "" Jan 29 19:02:05.471: INFO: ssh prow@35.185.251.137:22: stderr: "" Jan 29 19:02:05.471: INFO: ssh prow@35.185.251.137:22: exit code: 0 Jan 29 19:02:05.471: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-zmlw condition Ready to be false Jan 29 19:02:05.472: INFO: ssh prow@34.82.40.177:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 19:02:05.472: INFO: ssh prow@34.82.40.177:22: stdout: "" Jan 29 19:02:05.472: INFO: ssh prow@34.82.40.177:22: stderr: "" Jan 29 19:02:05.472: INFO: ssh prow@34.82.40.177:22: exit code: 0 Jan 29 19:02:05.472: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6j12 condition Ready to be false Jan 29 19:02:05.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:05.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:05.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:02:07.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:07.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:07.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:09.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:09.603: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:09.603: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:11.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:11.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:11.647: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:13.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:13.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:13.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:15.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:15.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:15.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:17.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:17.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:17.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:19.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:19.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:19.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:21.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:21.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:21.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:23.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:23.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:23.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:25.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:25.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:25.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:27.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:28.010: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:28.010: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:30.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:30.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:30.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:32.081: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:02:32.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:32.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:34.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:34.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:34.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:36.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:36.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:36.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:38.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:38.233: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:38.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:40.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:40.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:40.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:42.297: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:42.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:42.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:44.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:44.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:44.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:46.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:46.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:46.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:48.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:48.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:48.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:50.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:50.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:50.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:52.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:52.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:52.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:54.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:54.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:54.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:56.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:56.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:02:56.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:58.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:58.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:02:58.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:00.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:00.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:00.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:02.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:02.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:02.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:04.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:04.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:04.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:06.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:06.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:06.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:08.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:08.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:08.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:10.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:10.975: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:10.975: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:12.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:13.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:13.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:14.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:15.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:15.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:17.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:17.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:17.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:19.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:19.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:19.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:21.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:21.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:21.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:03:23.167: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:23.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:23.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:25.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:25.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:25.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:27.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:27.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:27.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:29.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:29.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:29.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:31.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:31.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:31.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:33.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:33.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:33.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:35.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:35.519: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:35.519: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:37.491: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:37.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:37.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:39.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:39.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:39.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:41.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:41.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:41.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:43.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:43.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:43.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:45.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:45.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:45.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:47.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:03:47.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:47.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:49.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:49.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:49.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:51.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:51.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:51.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:53.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:53.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:53.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:55.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:55.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:55.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:57.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:58.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:58.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:03:59.975: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:00.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:00.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:02.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:02.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:02.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:04.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:04.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:04.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:04:06.066: INFO: Node bootstrap-e2e-minion-group-kbdq didn't reach desired Ready condition status (false) within 2m0s Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-zmlw didn't reach desired Ready condition status (false) within 2m0s Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-6j12 didn't reach desired Ready condition status (false) within 2m0s Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-6j12 failed reboot test. Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-kbdq failed reboot test. Jan 29 19:04:06.147: INFO: Node bootstrap-e2e-minion-group-zmlw failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:04:06.148 < Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 19:04:06.148 (2m8.654s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:04:06.148 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 19:04:06.148 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
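The three "didn't reach desired Ready condition status (false) within 2m0s" lines above are the crux of the failure: after the reboot was ordered, each node kept reporting Ready=true, so the suite never observed the NotReady transition it waits for before checking recovery. The Go sketch below shows that style of wait against a node's Ready condition, assuming a configured client-go clientset; the package name reboottest, the helper name waitForReadyStatus, and the 2-second poll interval are illustrative, not taken from the test source (the log above merely suggests roughly 2s between polls).

    package reboottest

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForReadyStatus polls a node until its Ready condition reports the
    // desired status (false right after a reboot is issued, then true again
    // once the kubelet comes back), or until the timeout expires.
    func waitForReadyStatus(ctx context.Context, c kubernetes.Interface, node string, want v1.ConditionStatus, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            n, err := c.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
            if err == nil {
                for _, cond := range n.Status.Conditions {
                    if cond.Type == v1.NodeReady && cond.Status == want {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // poll interval; an assumption here
        }
        return fmt.Errorf("node %s: Ready condition did not become %v within %v", node, want, timeout)
    }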
Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-vf6r6 to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.433613505s (1.433635054s including waiting) Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": dial tcp 10.64.2.5:8181: connect: connection refused Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xqdgk to bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 990.09151ms (990.109933ms including waiting) Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container coredns Jan 29 19:04:06.198: INFO: event for 
coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "http://10.64.3.3:8181/ready": dial tcp 10.64.3.3:8181: connect: connection refused Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-vf6r6 Jan 29 19:04:06.198: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xqdgk Jan 29 19:04:06.198: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 19:04:06.198: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 19:04:06.198: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 19:04:06.198: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 19:04:06.198: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
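The coredns events above show the usual shape of a readiness-probe failure during node disruption: GETs against /ready on port 8181 are refused while the pod sandbox is re-created, and a 503 comes back while the server is up but not yet healthy. Expressed as a client-go type, such a probe looks roughly like the sketch below; the path and port match the events, while the period and threshold values are assumptions, not read from this log.

    package reboottest

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // corednsReadiness mirrors the probe the events above are exercising: an
    // HTTP GET against /ready on port 8181. While the sandbox is being
    // re-created the endpoint refuses connections, which is exactly what the
    // "connection refused" events record.
    var corednsReadiness = corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/ready",
                Port: intstr.FromInt(8181),
            },
        },
        PeriodSeconds:    10, // illustrative, not from the log
        FailureThreshold: 3,  // illustrative, not from the log
    }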
Jan 29 19:04:06.198: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 19:04:06.198: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3683c became leader Jan 29 19:04:06.198: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_67c56 became leader Jan 29 19:04:06.198: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_df769 became leader Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-2vqtg to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 954.093152ms (954.103201ms including waiting) Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container konnectivity-agent Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
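The "event for ..." lines in this block come from the suite's teardown, which lists every event in kube-system after the failure. A minimal sketch of that kind of collection, again assuming a configured clientset (the function name dumpEvents is illustrative, not the framework's):

    package reboottest

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // dumpEvents lists every event in a namespace and prints it in a form
    // close to the "event for <object>: {<source>} <reason>: <message>"
    // lines seen in this log.
    func dumpEvents(ctx context.Context, c kubernetes.Interface, namespace string) error {
        events, err := c.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, e := range events.Items {
            fmt.Printf("event for %s: {%s %s} %s: %s\n",
                e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
        }
        return nil
    }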
Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 19:04:06.198: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-2vqtg_kube-system(9b972156-4678-407b-bae6-cbb0320f2268) Jan 29 19:04:06.198: INFO: event for konnectivity-agent-86td2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-86td2 to bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.198: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 898.19014ms (898.205304ms including waiting) Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 19:04:06.199: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-86td2_kube-system(69719ba2-5e8c-4fb5-851f-01aacdebb1fe) Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-sl29q to bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 634.905196ms (634.917128ms including waiting) Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container konnectivity-agent Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
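The repeated "Back-off restarting failed container" events above surface on the pod itself as a waiting container with reason CrashLoopBackOff. A small helper in that vein, under the same clientset assumption, could enumerate them after a disruption:

    package reboottest

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // crashLoopingContainers reports containers the kubelet has put into
    // back-off, i.e. the condition behind the "Back-off restarting failed
    // container" events above.
    func crashLoopingContainers(ctx context.Context, c kubernetes.Interface, namespace string) ([]string, error) {
        pods, err := c.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var out []string
        for _, p := range pods.Items {
            for _, cs := range p.Status.ContainerStatuses {
                if cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff" {
                    out = append(out, fmt.Sprintf("%s/%s restarts=%d", p.Name, cs.Name, cs.RestartCount))
                }
            }
        }
        return out, nil
    }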
Jan 29 19:04:06.199: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 19:04:06.199: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-2vqtg Jan 29 19:04:06.199: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-86td2 Jan 29 19:04:06.199: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-sl29q Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.199: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 19:04:06.199: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 19:04:06.199: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
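The kube-apiserver entry above records a readiness probe answering with status 500 rather than refusing the connection, meaning the process was up but reporting itself unhealthy. A kubelet HTTP probe is essentially a short-timeout GET where any non-2xx result is a failure; a plain-Go approximation (timeout value assumed, not taken from the log):

    package reboottest

    import (
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues the same kind of request a kubelet HTTP probe
    // does: a GET with a short timeout, where a transport error or any
    // non-2xx status (such as the 500 above) counts as unhealthy.
    func checkHealthz(url string) error {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connection refused" while the component restarts
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 300 {
            return fmt.Errorf("probe failed with statuscode: %d", resp.StatusCode)
        }
        return nil
    }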
Jan 29 19:04:06.199: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_950167c8-36b9-42df-8a85-3a9d28c53b4d became leader Jan 29 19:04:06.199: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8362f635-12b0-418d-8264-942880514a9e became leader Jan 29 19:04:06.199: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1c918cd0-bdd9-4406-82a9-d0c9fd5f6aa2 became leader Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sqslx to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.376080383s (1.376088044s including waiting) Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sqslx Jan 29 19:04:06.199: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
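The LeaderElection events above mark each time a restarted kube-controller-manager (and, further down, kube-scheduler) re-acquired its lock after the previous holder's lease lapsed during the disruption. Assuming the lock is the component's Lease in kube-system, the default in recent releases, the current holder can be read like this:

    package reboottest

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // currentLeader reads the holder of a leader-election Lease such as
    // "kube-controller-manager" or "kube-scheduler" in kube-system. Each
    // "became leader" event above corresponds to HolderIdentity changing.
    func currentLeader(ctx context.Context, c kubernetes.Interface, name string) (string, error) {
        lease, err := c.CoordinationV1().Leases("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return "", err
        }
        if lease.Spec.HolderIdentity == nil {
            return "", fmt.Errorf("lease %s has no holder", name)
        }
        return *lease.Spec.HolderIdentity, nil
    }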
Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6j12_kube-system(4b09de720b01bf61ad28571efe2a195a) Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
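The DNSConfigForming events repeated above are the kubelet noting that the node's resolv.conf carried more nameservers than it will propagate, so only "1.1.1.1 8.8.8.8 1.0.0.1" was applied. The kubelet caps the list at three entries, matching the glibc resolver limit; the truncation amounts to the following sketch (helper name illustrative):

    package reboottest

    // capNameservers keeps only the first entries of a resolv.conf
    // nameserver list, as the kubelet does before forming a pod's DNS
    // config; dropped servers produce the DNSConfigForming events above.
    func capNameservers(servers []string) []string {
        const maxNameservers = 3 // glibc resolver limit honored by kubelet
        if len(servers) > maxNameservers {
            return servers[:maxNameservers]
        }
        return servers
    }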
Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae) Jan 29 19:04:06.199: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 19:04:06.199: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 19:04:06.199: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc11fe53-5cf0-4193-a2bb-e6c9362442ab became leader Jan 29 19:04:06.199: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_01571e77-c85b-4452-a422-92094f674352 became leader Jan 29 19:04:06.199: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3563e5ef-6b74-4b5f-aaae-be9535c8b370 became leader Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
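The "0/1 nodes are available: 1 node(s) were unschedulable" events above describe the window when bootstrap-e2e-master was the only registered node and it carried spec.unschedulable=true (visible in the node dump near the end of this log). Cordoning is just that one boolean; a sketch of toggling it with a strategic-merge patch, under the same clientset assumption:

    package reboottest

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // setUnschedulable cordons or uncordons a node. While the flag is true
    // the scheduler reports exactly the FailedScheduling events above.
    func setUnschedulable(ctx context.Context, c kubernetes.Interface, node string, cordon bool) error {
        patch := []byte(`{"spec":{"unschedulable":false}}`)
        if cordon {
            patch = []byte(`{"spec":{"unschedulable":true}}`)
        }
        _, err := c.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }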
Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-ch8vf to bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 524.239661ms (524.253716ms including waiting) Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container default-http-backend Jan 29 19:04:06.199: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-ch8vf Jan 29 19:04:06.199: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 19:04:06.199: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 19:04:06.199: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 19:04:06.199: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 19:04:06.199: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-69vb9 to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 819.373409ms (819.391143ms including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.793896744s (1.793906041s including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created 
container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bsd85 to bootstrap-e2e-master Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.670101ms (733.681792ms including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.802128586s (1.802140747s including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-k4wx2 to bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.06682ms (714.080021ms including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: 
Started container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.785588602s (1.785596591s including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sxj7d to bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.105616ms (714.11794ms including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.882455818s (1.882464632s including waiting) Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bsd85 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-k4wx2 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-69vb9 Jan 29 19:04:06.199: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sxj7d Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: no nodes 
available to schedule pods Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-57s7b to bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.867162036s (1.867179734s including waiting) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.143065018s (1.143075491s including waiting) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-57s7b Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-57s7b Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rbv42 to bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.329430274s 
(1.329453807s including waiting) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 999.838364ms (999.850042ms including waiting) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": dial tcp 10.64.3.4:10250: connect: connection refused Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": dial tcp 10.64.3.4:10250: connect: connection refused Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server-nanny Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Failed: Error: failed to get sandbox container task: no running task found: task 9b8fcc9e9e402a3c97e0f4aec77203618c2c01ccfd4d4d09a7ae88ba7b697e9a not found: not found Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
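The metrics-server probes above fail in two distinct ways: "connection refused" means nothing was listening yet, while "Client.Timeout exceeded while awaiting headers" means the probe's timeout expired against a process that was up but wedged. Either way the net effect is the pod's Ready condition staying False, which is the predicate readiness waits check. A sketch of that predicate:

    package reboottest

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // podRunningAndReady reports whether a pod is Running with Ready=True
    // (or has already Succeeded). The failing readiness probes above are
    // what keep Ready=False while a rebooted node's pods recover.
    func podRunningAndReady(p *corev1.Pod) bool {
        if p.Status.Phase == corev1.PodSucceeded {
            return true
        }
        if p.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, cond := range p.Status.Conditions {
            if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }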
Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rbv42 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 19:04:06.199: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.399668429s (1.399675942s including waiting) Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
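Further down, the dump walks each node and tries to log its pods, unsuccessfully here because no konnectivity agent was available to proxy to the kubelets ("No agent available"). From the API server's side, though, the per-node pod set is just a field-selector query on spec.nodeName; a sketch with the usual clientset assumption:

    package reboottest

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podsOnNode lists the pods scheduled to one node via a server-side
    // field selector, useful for checking what was running on a node that
    // failed its reboot when the kubelet itself is unreachable.
    func podsOnNode(ctx context.Context, c kubernetes.Interface, node string) ([]corev1.Pod, error) {
        list, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
            FieldSelector: "spec.nodeName=" + node,
        })
        if err != nil {
            return nil, err
        }
        return list.Items, nil
    }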
Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(998e9588-4f8a-4c36-bffc-169b133e589e) Jan 29 19:04:06.199: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:04:06.199 (51ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:04:06.199 Jan 29 19:04:06.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:04:06.242 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:04:06.242 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:04:06.242 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:04:06.242 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:04:06.242 STEP: Collecting events from namespace "reboot-2882". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 19:04:06.242 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 19:04:06.283 Jan 29 19:04:06.324: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 19:04:06.324: INFO: Jan 29 19:04:06.371: INFO: Logging node info for node bootstrap-e2e-master Jan 29 19:04:06.414: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6d594531-bf60-4169-a952-1435da6f1f19 1159 0 2023-01-29 18:58:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 19:03:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 19:03:28 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 19:03:28 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 19:03:28 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 19:03:28 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.160.185,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:715ad78430040f7d6ba514abe5aaad49,SystemUUID:715ad784-3004-0f7d-6ba5-14abe5aaad49,BootID:68c04943-fcd4-4db6-91f3-becf325d9eb5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:04:06.415: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 19:04:06.458: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 19:04:06.502: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Jan 29 19:04:06.502: INFO: Logging node info for node bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.544: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6j12 ab88abcc-a824-4e7b-91d9-e5b55ca7b07b 1152 0 2023-01-29 18:58:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6j12 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:58:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2023-01-29 19:03:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-6j12,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 
UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.40.177,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:533e501db67cda40a67ec8f66182930e,SystemUUID:533e501d-b67c-da40-a67e-c8f66182930e,BootID:7cadddf6-de30-4659-92da-b0bad3394bd4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:04:06.544: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.587: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6j12 Jan 29 19:04:06.630: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6j12: error trying to reach service: No agent available Jan 29 19:04:06.630: INFO: Logging node info for node bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.672: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kbdq c88d547b-ac1b-48a3-9f38-f761a4792a9d 1157 0 2023-01-29 18:58:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kbdq kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:59:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2023-01-29 19:03:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-kbdq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:27 +0000 
UTC,LastTransitionTime:2023-01-29 19:03:26 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.183.142,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f85a1ba151054485449fa0d667f3e53e,SystemUUID:f85a1ba1-5105-4485-449f-a0d667f3e53e,BootID:7f2871d2-e9bf-4efa-98a3-73903aa33d68,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:04:06.672: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.715: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kbdq Jan 29 19:04:06.759: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-kbdq: error trying to reach service: No agent available Jan 29 19:04:06.759: INFO: Logging node info for node bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.801: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zmlw e228bd00-93a0-454f-b62d-2a81447198ac 1150 0 2023-01-29 18:58:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zmlw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:58:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2023-01-29 19:03:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-zmlw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 
19:03:25 +0000 UTC,LastTransitionTime:2023-01-29 19:03:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.185.251.137,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:edebb6d4adaefd8f58c1a37613cc5a13,SystemUUID:edebb6d4-adae-fd8f-58c1-a37613cc5a13,BootID:edd86694-6db8-41e1-b532-ff776863141f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:04:06.801: INFO: Logging kubelet events for 
node bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.844: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zmlw Jan 29 19:04:06.887: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-zmlw: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:04:06.887 (646ms) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:04:06.887 (646ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:04:06.887 STEP: Destroying namespace "reboot-2882" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 19:04:06.887 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:04:06.93 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:04:06.931 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:04:06.931 (0s)
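Note on the repeated "Unable to retrieve kubelet pods ... error trying to reach service: No agent available" lines above: this cluster fronts apiserver-to-kubelet traffic with Konnectivity (the master image list includes registry.k8s.io/kas-network-proxy/proxy-server and every node runs proxy-agent), and that error is the proxy server reporting that no agent currently holds a tunnel to it, so the debug dump cannot reach any kubelet. A minimal manual-triage sketch, assuming kubectl points at the same cluster; the "konnectivity" pod name is an assumption (the usual GCE kube-up addon name), not something shown in this log:

# Are the konnectivity agents running on the nodes?
kubectl -n kube-system get pods -o wide | grep -i konnectivity

# Same proxied path the e2e dump uses; while no agent tunnel exists,
# this fails with the same "No agent available" error.
NODE=bootstrap-e2e-minion-group-6j12
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/pods"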
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
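The focus pattern above is simply the fully escaped spec name; --ginkgo.focus is an ordinary regular expression, so a distinctive unescaped substring selects the same spec. A hypothetical equivalent invocation:

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=switching off the network interface'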
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:01:56.301 (from ginkgo_report.xml)
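This variant never power-cycles the VMs. As the SSH lines in the log below show, the test runs a detached shell on each node that downs eth0, waits two minutes, brings the link back up, re-runs DHCP, and restarts systemd-networkd, then expects each node's Ready condition to flip to false and later return to true. The one-line command from the log, re-wrapped for readability (content unchanged):

nohup sh -c '
  sleep 10
  echo Shutting down eth0 | sudo tee /dev/kmsg
  sudo ip link set eth0 down | sudo tee /dev/kmsg
  sleep 120
  echo Starting up eth0 | sudo tee /dev/kmsg
  sudo ip link set eth0 up | sudo tee /dev/kmsg
  sleep 10
  echo Retrying starting up eth0 | sudo tee /dev/kmsg
  sudo ip link set eth0 up | sudo tee /dev/kmsg
  echo Running dhclient | sudo tee /dev/kmsg
  sudo dhclient | sudo tee /dev/kmsg
  echo Starting systemd-networkd | sudo tee /dev/kmsg
  sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg
' >/dev/null 2>&1 &

The long run of "Condition Ready of node ... is true instead of false" lines that follows shows all three nodes still reporting Ready throughout the 2m0s window the test allows for the link to drop, which is why the spec gives up at 19:01:56.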
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:59:52.497 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:59:52.497 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:59:52.497 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 18:59:52.497 Jan 29 18:59:52.497: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 18:59:52.499 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 18:59:53.041 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 18:59:53.215 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:59:53.37 (873ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:59:53.37 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:59:53.37 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 18:59:53.37 Jan 29 18:59:53.639: INFO: Getting bootstrap-e2e-minion-group-6j12 Jan 29 18:59:53.639: INFO: Getting bootstrap-e2e-minion-group-zmlw Jan 29 18:59:53.640: INFO: Getting bootstrap-e2e-minion-group-kbdq Jan 29 18:59:53.685: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-zmlw condition Ready to be true Jan 29 18:59:53.685: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kbdq condition Ready to be true Jan 29 18:59:53.685: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6j12 condition Ready to be true Jan 29 18:59:53.730: INFO: Node bootstrap-e2e-minion-group-6j12 has 3 assigned pods with no liveness probes: [metadata-proxy-v0.1-69vb9 kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12] Jan 29 18:59:53.730: INFO: Node bootstrap-e2e-minion-group-kbdq has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-69vb9 kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12] Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 18:59:53.730: INFO: Node bootstrap-e2e-minion-group-zmlw has 3 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6j12" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sxj7d" 
in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-69vb9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sqslx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-k4wx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.778: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq": Phase="Running", Reason="", readiness=true. Elapsed: 47.856447ms Jan 29 18:59:53.778: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.781: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 50.757274ms Jan 29 18:59:53.781: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.782: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=true. Elapsed: 52.104762ms Jan 29 18:59:53.782: INFO: Pod "metadata-proxy-v0.1-k4wx2" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12": Phase="Running", Reason="", readiness=true. Elapsed: 53.111705ms Jan 29 18:59:53.783: INFO: Pod "metadata-proxy-v0.1-69vb9": Phase="Running", Reason="", readiness=true. Elapsed: 53.092771ms Jan 29 18:59:53.783: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Pod "metadata-proxy-v0.1-69vb9" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Pod "metadata-proxy-v0.1-sxj7d": Phase="Running", Reason="", readiness=true. Elapsed: 53.194138ms Jan 29 18:59:53.783: INFO: Pod "metadata-proxy-v0.1-sxj7d" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx": Phase="Running", Reason="", readiness=true. Elapsed: 53.146461ms Jan 29 18:59:53.783: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 18:59:53.783: INFO: Getting external IP address for bootstrap-e2e-minion-group-kbdq Jan 29 18:59:53.783: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-kbdq(34.168.183.142:22) Jan 29 18:59:53.783: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-69vb9 kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12] Jan 29 18:59:53.783: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=true. Elapsed: 53.111804ms Jan 29 18:59:53.783: INFO: Getting external IP address for bootstrap-e2e-minion-group-6j12 Jan 29 18:59:53.783: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 18:59:53.783: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6j12(34.82.40.177:22) Jan 29 18:59:53.783: INFO: Getting external IP address for bootstrap-e2e-minion-group-zmlw Jan 29 18:59:53.783: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-zmlw(35.185.251.137:22) Jan 29 18:59:54.301: INFO: ssh prow@35.185.251.137:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd 
| sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:59:54.301: INFO: ssh prow@35.185.251.137:22: stdout: "" Jan 29 18:59:54.301: INFO: ssh prow@35.185.251.137:22: stderr: "" Jan 29 18:59:54.301: INFO: ssh prow@35.185.251.137:22: exit code: 0 Jan 29 18:59:54.301: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-zmlw condition Ready to be false Jan 29 18:59:54.306: INFO: ssh prow@34.82.40.177:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:59:54.306: INFO: ssh prow@34.82.40.177:22: stdout: "" Jan 29 18:59:54.306: INFO: ssh prow@34.82.40.177:22: stderr: "" Jan 29 18:59:54.306: INFO: ssh prow@34.82.40.177:22: exit code: 0 Jan 29 18:59:54.306: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6j12 condition Ready to be false Jan 29 18:59:54.306: INFO: ssh prow@34.168.183.142:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:59:54.306: INFO: ssh prow@34.168.183.142:22: stdout: "" Jan 29 18:59:54.306: INFO: ssh prow@34.168.183.142:22: stderr: "" Jan 29 18:59:54.306: INFO: ssh prow@34.168.183.142:22: exit code: 0 Jan 29 18:59:54.306: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kbdq condition Ready to be false Jan 29 18:59:54.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:54.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:54.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:56.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:56.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:56.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:58.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:58.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:58.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:00.472: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:00.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:00.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:02.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:02.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:02.529: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:04.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:04.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:04.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:06.602: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:06.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:06.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:08.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:08.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:08.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:10.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:00:10.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:10.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:12.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:12.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:12.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:14.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:14.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:14.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:16.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:16.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:16.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:18.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:18.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:18.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:46.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:46.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:46.769: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:48.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:48.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:48.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:50.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:50.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:50.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:52.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:52.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:52.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:54.967: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:54.967: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:54.967: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:57.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:57.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:57.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:59.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:59.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:59.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:01.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:01.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:01.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:03.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:03.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:03.154: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:05.196: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:05.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:05.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:07.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:07.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:07.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:09.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:09.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:09.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:11.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:11.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:11.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:13.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:13.372: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:13.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:15.412: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:15.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:15.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:17.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:17.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:17.491: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:19.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:19.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:19.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:21.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:21.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:21.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:23.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:23.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:23.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:25.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:25.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:25.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:29.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:29.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:29.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:31.800: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:31.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:31.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:33.845: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:33.845: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:33.845: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:35.889: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:35.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:35.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:37.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:37.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:37.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:39.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:39.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:39.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:44.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:44.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:44.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:46.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:46.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:46.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:48.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:48.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:48.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:50.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:50.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:50.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:52.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:52.252: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:52.252: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:54.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:54.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:54.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-6j12 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-kbdq didn't reach desired Ready condition status (false) within 2m0s
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-zmlw didn't reach desired Ready condition status (false) within 2m0s
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-6j12 failed reboot test.
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-kbdq failed reboot test.
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-zmlw failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:01:56.301
< Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 19:01:56.301 (2m2.93s)
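The repeated poll above is the suite waiting for each node's Ready condition to flip to false once the disruption lands; because every kubelet kept posting Ready, the 2m0s wait expired and all three nodes were marked as failing the reboot test. A minimal sketch of that kind of Ready-condition poll, assuming client-go; the suite's real helper (under test/e2e/framework/node) differs in detail, and the names below are illustrative, not its exact API:

package nodepoll

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForReadyStatus polls the named node until its Ready condition matches
// wantTrue or the timeout expires. The reboot test effectively calls this
// with wantTrue=false right after triggering the disruption; that is the
// poll timing out in the log above.
func waitForReadyStatus(ctx context.Context, c kubernetes.Interface, name string, wantTrue bool, timeout time.Duration) bool {
	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			continue // transient API errors: just retry until the deadline
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type != v1.NodeReady {
				continue
			}
			if (cond.Status == v1.ConditionTrue) == wantTrue {
				return true
			}
			// Mirrors "Condition Ready of node X is true instead of false. Reason: ..., message: ..."
			fmt.Printf("Condition %s of node %s is %v instead of %v. Reason: %s, message: %s\n",
				cond.Type, name, cond.Status == v1.ConditionTrue, wantTrue, cond.Reason, cond.Message)
		}
	}
	return false // caller then logs "didn't reach desired Ready condition status (...) within ..."
}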
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:01:56.301
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 19:01:56.301
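Each entry that follows is one v1.Event from kube-system, rendered as "{source} reason: message". A rough sketch of such a dump, assuming client-go; the framework's own debug helper formats events along these lines, but this is not its exact code:

package eventsdump

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpEvents prints "event for <object>: {<component> <host>} <reason>: <message>"
// for every event currently recorded in the namespace, matching the log format below.
func dumpEvents(ctx context.Context, c kubernetes.Interface, ns string) error {
	events, err := c.CoreV1().Events(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
	return nil
}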
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-vf6r6 to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.433613505s (1.433635054s including waiting)
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": dial tcp 10.64.2.5:8181: connect: connection refused
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xqdgk to bootstrap-e2e-minion-group-kbdq
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 990.09151ms (990.109933ms including waiting)
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-vf6r6
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xqdgk
Jan 29 19:01:56.354: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 19:01:56.354: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 29 19:01:56.354: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 19:01:56.354: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3683c became leader
Jan 29 19:01:56.354: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_67c56 became leader
Jan 29 19:01:56.354: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_df769 became leader
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-2vqtg to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 954.093152ms (954.103201ms including waiting)
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-86td2 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 898.19014ms (898.205304ms including waiting)
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-sl29q to bootstrap-e2e-minion-group-kbdq
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 634.905196ms (634.917128ms including waiting)
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-2vqtg
Jan 29 19:01:56.354: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-86td2
Jan 29 19:01:56.354: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-sl29q
Jan 29 19:01:56.354: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 19:01:56.354: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 19:01:56.354: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 19:01:56.354: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 19:01:56.354: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_950167c8-36b9-42df-8a85-3a9d28c53b4d became leader
Jan 29 19:01:56.354: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8362f635-12b0-418d-8264-942880514a9e became leader
Jan 29 19:01:56.354: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1c918cd0-bdd9-4406-82a9-d0c9fd5f6aa2 became leader
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sqslx to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.376080383s (1.376088044s including waiting)
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sqslx
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6j12_kube-system(4b09de720b01bf61ad28571efe2a195a)
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae)
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 19:01:56.354: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc11fe53-5cf0-4193-a2bb-e6c9362442ab became leader
Jan 29 19:01:56.354: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_01571e77-c85b-4452-a422-92094f674352 became leader
Jan 29 19:01:56.354: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3563e5ef-6b74-4b5f-aaae-be9535c8b370 became leader
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-ch8vf to bootstrap-e2e-minion-group-zmlw
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 524.239661ms (524.253716ms including waiting)
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container default-http-backend
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-ch8vf
Jan 29 19:01:56.354: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 19:01:56.354: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 19:01:56.354: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 19:01:56.354: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 19:01:56.354: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-69vb9 to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 819.373409ms (819.391143ms including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.793896744s (1.793906041s including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bsd85 to bootstrap-e2e-master
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.670101ms (733.681792ms including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.802128586s (1.802140747s including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-k4wx2 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.06682ms (714.080021ms including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.785588602s (1.785596591s including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sxj7d to bootstrap-e2e-minion-group-kbdq
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.105616ms (714.11794ms including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.882455818s (1.882464632s including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bsd85
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-k4wx2
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-69vb9
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sxj7d
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-57s7b to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.867162036s (1.867179734s including waiting)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.143065018s (1.143075491s including waiting)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-57s7b
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-57s7b
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rbv42 to bootstrap-e2e-minion-group-kbdq
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.329430274s (1.329453807s including waiting)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 999.838364ms (999.850042ms including waiting)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": dial tcp 10.64.3.4:10250: connect: connection refused
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": dial tcp 10.64.3.4:10250: connect: connection refused
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Failed: Error: failed to get sandbox container task: no running task found: task 9b8fcc9e9e402a3c97e0f4aec77203618c2c01ccfd4d4d09a7ae88ba7b697e9a not found: not found
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rbv42
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.399668429s (1.399675942s including waiting)
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(998e9588-4f8a-4c36-bffc-169b133e589e)
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:01:56.354 (53ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:01:56.354
Jan 29 19:01:56.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:01:56.398 (44ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:01:56.399
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:01:56.399 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:01:56.399
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:01:56.399
STEP: Collecting events from namespace "reboot-3555". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 19:01:56.399
STEP: Found 0 events.
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 19:01:56.44 Jan 29 19:01:56.481: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 19:01:56.481: INFO: Jan 29 19:01:56.525: INFO: Logging node info for node bootstrap-e2e-master Jan 29 19:01:56.567: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6d594531-bf60-4169-a952-1435da6f1f19 582 0 2023-01-29 18:58:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:58:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:21 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:21 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:21 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:58:21 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.160.185,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:715ad78430040f7d6ba514abe5aaad49,SystemUUID:715ad784-3004-0f7d-6ba5-14abe5aaad49,BootID:68c04943-fcd4-4db6-91f3-becf325d9eb5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:01:56.568: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 19:01:56.611: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 19:01:56.654: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Jan 29 19:01:56.654: INFO: Logging node info for node bootstrap-e2e-minion-group-6j12 Jan 29 19:01:56.697: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6j12 ab88abcc-a824-4e7b-91d9-e5b55ca7b07b 662 0 2023-01-29 18:58:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6j12 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-29 18:58:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:58:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-6j12,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:58:09 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:58:09 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:58:09 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:09 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:09 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:09 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:58:09 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:58:35 +0000 UTC,LastTransitionTime:2023-01-29 18:58:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.40.177,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6j12.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:533e501db67cda40a67ec8f66182930e,SystemUUID:533e501d-b67c-da40-a67e-c8f66182930e,BootID:7cadddf6-de30-4659-92da-b0bad3394bd4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:01:56.697: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6j12 Jan 29 19:01:56.740: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6j12 Jan 29 19:01:56.783: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6j12: error trying to reach service: No agent available Jan 29 19:01:56.783: INFO: Logging node info for node bootstrap-e2e-minion-group-kbdq Jan 29 19:01:56.826: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kbdq c88d547b-ac1b-48a3-9f38-f761a4792a9d 692 0 2023-01-29 18:58:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-kbdq kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 18:58:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:59:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-kbdq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:11 +0000 UTC,LastTransitionTime:2023-01-29 18:58:10 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:11 +0000 UTC,LastTransitionTime:2023-01-29 18:58:10 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:11 +0000 UTC,LastTransitionTime:2023-01-29 18:58:10 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:58:11 +0000 UTC,LastTransitionTime:2023-01-29 18:58:10 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:58:11 +0000 UTC,LastTransitionTime:2023-01-29 18:58:10 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:58:11 +0000 UTC,LastTransitionTime:2023-01-29 18:58:10 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:58:11 +0000 UTC,LastTransitionTime:2023-01-29 18:58:10 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:59:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.183.142,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kbdq.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f85a1ba151054485449fa0d667f3e53e,SystemUUID:f85a1ba1-5105-4485-449f-a0d667f3e53e,BootID:7f2871d2-e9bf-4efa-98a3-73903aa33d68,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:01:56.826: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kbdq Jan 29 19:01:56.870: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kbdq Jan 29 19:01:56.915: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-kbdq: error trying to reach service: No agent available Jan 29 19:01:56.915: INFO: Logging node info for node bootstrap-e2e-minion-group-zmlw Jan 29 19:01:56.957: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zmlw e228bd00-93a0-454f-b62d-2a81447198ac 657 0 2023-01-29 18:58:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zmlw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 18:58:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:58:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-minion-group-zmlw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 
91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:58:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:58:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:58:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:58:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:58:07 +0000 UTC,LastTransitionTime:2023-01-29 18:58:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:58:33 +0000 UTC,LastTransitionTime:2023-01-29 18:58:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.185.251.137,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zmlw.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:edebb6d4adaefd8f58c1a37613cc5a13,SystemUUID:edebb6d4-adae-fd8f-58c1-a37613cc5a13,BootID:edd86694-6db8-41e1-b532-ff776863141f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 19:01:56.958: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-zmlw Jan 29 19:01:57.001: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zmlw Jan 29 19:01:57.045: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-zmlw: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:01:57.045 (646ms) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:01:57.045 (646ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:01:57.045 STEP: Destroying namespace "reboot-3555" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 19:01:57.045 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 19:01:57.089 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:01:57.089 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 19:01:57.09 (0s)
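A note on the dump above: every per-node dump ends the same way, with "Unable to retrieve kubelet pods ...: error trying to reach service: No agent available". That error is raised on the API server's Konnectivity proxy hop to the kubelet, not on the kubelet itself; this cluster does run that proxy (the master's image list above includes kas-network-proxy/proxy-server and every node's includes kas-network-proxy/proxy-agent), so the pod listing is failing at the apiserver-to-node tunnel. A quick manual check while triaging, sketched below; the pod name is a placeholder, and treating the agents as ordinary kube-system pods is an assumption consistent with the image lists in this dump:

kubectl -n kube-system get pods -o wide | grep -i konnectivity   # any proxy agents still running, and on which nodes?
kubectl -n kube-system logs <konnectivity-agent-pod> --tail=50   # placeholder name; look for dial/tunnel errors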
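The pages of "Condition Ready of node ... is true instead of false" lines in the failure below are the suite's node-condition polling: a GET on the Node object roughly every two seconds, comparing the Ready condition against the expected value until the timeout expires. A minimal manual equivalent while triaging (illustrative only, assuming a working kubeconfig):

while true; do
  kubectl get node bootstrap-e2e-minion-group-6j12 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
  sleep 2
done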
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:01:56.301
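This case exercises the same recovery path as the clean-reboot failure above, except that instead of rebooting, each node's primary NIC is taken down over SSH and brought back two minutes later. The one-line script the suite pushes to every node (quoted verbatim several times in the log below) is easier to audit split across lines; the commands are unchanged from the log, the comments are editorial:

nohup sh -c '
  sleep 10
  echo Shutting down eth0 | sudo tee /dev/kmsg        # leave a marker in the kernel log
  sudo ip link set eth0 down | sudo tee /dev/kmsg     # drop the primary interface
  sleep 120                                           # keep the node off the network for 2 minutes
  echo Starting up eth0 | sudo tee /dev/kmsg
  sudo ip link set eth0 up | sudo tee /dev/kmsg
  sleep 10
  echo Retrying starting up eth0 | sudo tee /dev/kmsg
  sudo ip link set eth0 up | sudo tee /dev/kmsg       # retry the link-up once
  echo Running dhclient | sudo tee /dev/kmsg
  sudo dhclient | sudo tee /dev/kmsg                  # re-acquire the DHCP lease
  echo Starting systemd-networkd | sudo tee /dev/kmsg
  sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg
' >/dev/null 2>&1 &

In the polling that follows, all three nodes keep reporting Ready=true for the entire window in which eth0 should have been down, so the test times out waiting for Ready to become false. Either the link never actually went down, or the kubelet kept heartbeating through some other path; the markers the script writes to /dev/kmsg (visible on the serial console) are the way to tell those two apart.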
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:59:52.497 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:59:52.497 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:59:52.497 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 18:59:52.497 Jan 29 18:59:52.497: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 18:59:52.499 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 18:59:53.041 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 18:59:53.215 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:59:53.37 (873ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:59:53.37 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:59:53.37 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 18:59:53.37 Jan 29 18:59:53.639: INFO: Getting bootstrap-e2e-minion-group-6j12 Jan 29 18:59:53.639: INFO: Getting bootstrap-e2e-minion-group-zmlw Jan 29 18:59:53.640: INFO: Getting bootstrap-e2e-minion-group-kbdq Jan 29 18:59:53.685: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-zmlw condition Ready to be true Jan 29 18:59:53.685: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kbdq condition Ready to be true Jan 29 18:59:53.685: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6j12 condition Ready to be true Jan 29 18:59:53.730: INFO: Node bootstrap-e2e-minion-group-6j12 has 3 assigned pods with no liveness probes: [metadata-proxy-v0.1-69vb9 kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12] Jan 29 18:59:53.730: INFO: Node bootstrap-e2e-minion-group-kbdq has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-69vb9 kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12] Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 18:59:53.730: INFO: Node bootstrap-e2e-minion-group-zmlw has 3 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for 3 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6j12" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-sxj7d" 
in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-69vb9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sqslx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-k4wx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.730: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:59:53.778: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq": Phase="Running", Reason="", readiness=true. Elapsed: 47.856447ms Jan 29 18:59:53.778: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kbdq" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.781: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 50.757274ms Jan 29 18:59:53.781: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.782: INFO: Pod "metadata-proxy-v0.1-k4wx2": Phase="Running", Reason="", readiness=true. Elapsed: 52.104762ms Jan 29 18:59:53.782: INFO: Pod "metadata-proxy-v0.1-k4wx2" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12": Phase="Running", Reason="", readiness=true. Elapsed: 53.111705ms Jan 29 18:59:53.783: INFO: Pod "metadata-proxy-v0.1-69vb9": Phase="Running", Reason="", readiness=true. Elapsed: 53.092771ms Jan 29 18:59:53.783: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6j12" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Pod "metadata-proxy-v0.1-69vb9" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Pod "metadata-proxy-v0.1-sxj7d": Phase="Running", Reason="", readiness=true. Elapsed: 53.194138ms Jan 29 18:59:53.783: INFO: Pod "metadata-proxy-v0.1-sxj7d" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx": Phase="Running", Reason="", readiness=true. Elapsed: 53.146461ms Jan 29 18:59:53.783: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-kbdq metadata-proxy-v0.1-sxj7d] Jan 29 18:59:53.783: INFO: Getting external IP address for bootstrap-e2e-minion-group-kbdq Jan 29 18:59:53.783: INFO: Pod "kube-dns-autoscaler-5f6455f985-sqslx" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-kbdq(34.168.183.142:22) Jan 29 18:59:53.783: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-69vb9 kube-dns-autoscaler-5f6455f985-sqslx kube-proxy-bootstrap-e2e-minion-group-6j12] Jan 29 18:59:53.783: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw": Phase="Running", Reason="", readiness=true. Elapsed: 53.111804ms Jan 29 18:59:53.783: INFO: Getting external IP address for bootstrap-e2e-minion-group-6j12 Jan 29 18:59:53.783: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-zmlw" satisfied condition "running and ready, or succeeded" Jan 29 18:59:53.783: INFO: Wanted all 3 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-zmlw metadata-proxy-v0.1-k4wx2 volume-snapshot-controller-0] Jan 29 18:59:53.783: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6j12(34.82.40.177:22) Jan 29 18:59:53.783: INFO: Getting external IP address for bootstrap-e2e-minion-group-zmlw Jan 29 18:59:53.783: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-zmlw(35.185.251.137:22) Jan 29 18:59:54.301: INFO: ssh prow@35.185.251.137:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd 
| sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:59:54.301: INFO: ssh prow@35.185.251.137:22: stdout: "" Jan 29 18:59:54.301: INFO: ssh prow@35.185.251.137:22: stderr: "" Jan 29 18:59:54.301: INFO: ssh prow@35.185.251.137:22: exit code: 0 Jan 29 18:59:54.301: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-zmlw condition Ready to be false Jan 29 18:59:54.306: INFO: ssh prow@34.82.40.177:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:59:54.306: INFO: ssh prow@34.82.40.177:22: stdout: "" Jan 29 18:59:54.306: INFO: ssh prow@34.82.40.177:22: stderr: "" Jan 29 18:59:54.306: INFO: ssh prow@34.82.40.177:22: exit code: 0 Jan 29 18:59:54.306: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6j12 condition Ready to be false Jan 29 18:59:54.306: INFO: ssh prow@34.168.183.142:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:59:54.306: INFO: ssh prow@34.168.183.142:22: stdout: "" Jan 29 18:59:54.306: INFO: ssh prow@34.168.183.142:22: stderr: "" Jan 29 18:59:54.306: INFO: ssh prow@34.168.183.142:22: exit code: 0 Jan 29 18:59:54.306: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kbdq condition Ready to be false Jan 29 18:59:54.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:54.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:54.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:56.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:56.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:56.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:58.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:58.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:59:58.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:00.472: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:00.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:00.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:02.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:02.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:02.529: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:04.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:04.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:04.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:06.602: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:06.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:06.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:08.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:08.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:08.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:10.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 19:00:10.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:10.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:12.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:12.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:12.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:14.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:14.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:14.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:16.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:16.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:16.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:18.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:18.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:18.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:46.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:46.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:46.769: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:48.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 19:00:48.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:48.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:50.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:50.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:50.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:52.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:52.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:52.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:54.967: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:54.967: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:54.967: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:57.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:57.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:57.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:59.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:59.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:00:59.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:01.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:01.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:01.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:03.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:03.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:03.154: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:05.196: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:05.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:05.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:07.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:07.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:07.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:09.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:09.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:09.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:11.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:11.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:11.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:13.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:13.372: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:13.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:15.412: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:15.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:15.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:17.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:17.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:17.491: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:19.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:19.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:19.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:21.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:21.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:21.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:23.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:23.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:23.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:25.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:25.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:25.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:29.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:29.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:29.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:31.800: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:31.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:31.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:33.845: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:33.845: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:33.845: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:35.889: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:35.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:35.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:37.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:37.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:37.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:39.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:39.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:39.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:44.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:44.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:44.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:46.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:46.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:46.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:48.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:48.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:48.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:50.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:50.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:50.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:52.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:52.252: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:52.252: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:54.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-6j12 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:54.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-zmlw is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:54.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-kbdq is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-6j12 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-kbdq didn't reach desired Ready condition status (false) within 2m0s
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-zmlw didn't reach desired Ready condition status (false) within 2m0s
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-6j12 failed reboot test.
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-kbdq failed reboot test.
Jan 29 19:01:56.300: INFO: Node bootstrap-e2e-minion-group-zmlw failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 19:01:56.301
< Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 19:01:56.301 (2m2.93s)
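For context on the wait loop that produced the repeated "Condition Ready ... is true instead of false" lines above: after issuing the disruption, the test polls each node's Ready condition roughly every two seconds, expecting it to flip to false within a 2m0s budget before it starts waiting for recovery. Below is a minimal client-go sketch of that kind of loop; it is illustrative only, not the framework's actual code (the name waitForNodeReadyStatus and the fixed 2s interval are assumptions).

```go
// Illustrative sketch: poll a node's Ready condition until it reaches the
// desired status or a timeout expires, mirroring the shape of the log lines
// above. waitForNodeReadyStatus is a hypothetical name, not the e2e
// framework's; the 2s interval matches the cadence of the entries above.
package rebootsketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForNodeReadyStatus(c kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API hiccups as transient and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type != corev1.NodeReady {
				continue
			}
			if cond.Status == want {
				return true, nil
			}
			// This is the line repeated throughout the log above.
			fmt.Printf("Condition Ready of node %s is %s instead of %s. Reason: %s, message: %s\n",
				name, cond.Status, want, cond.Reason, cond.Message)
			return false, nil
		}
		return false, nil
	})
}
```

Read against a loop of this shape, the failure mode in this run is that Ready never became false on any of the three nodes within 2m0s, which suggests the disruption never took effect, rather than the nodes failing to come back afterwards.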
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:01:56.301
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 19:01:56.301
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-vf6r6 to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.433613505s (1.433635054s including waiting)
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": dial tcp 10.64.2.5:8181: connect: connection refused
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-vf6r6: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xqdgk to bootstrap-e2e-minion-group-kbdq
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 990.09151ms (990.109933ms including waiting)
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container coredns
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f-xqdgk: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-vf6r6
Jan 29 19:01:56.354: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xqdgk
Jan 29 19:01:56.354: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 19:01:56.354: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 19:01:56.354: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 29 19:01:56.354: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 19:01:56.354: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 19:01:56.354: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3683c became leader
Jan 29 19:01:56.354: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_67c56 became leader
Jan 29 19:01:56.354: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_df769 became leader
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-2vqtg to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 954.093152ms (954.103201ms including waiting)
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-2vqtg: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-86td2 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 898.19014ms (898.205304ms including waiting)
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-86td2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-sl29q to bootstrap-e2e-minion-group-kbdq
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 634.905196ms (634.917128ms including waiting)
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent-sl29q: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container konnectivity-agent
Jan 29 19:01:56.354: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-2vqtg
Jan 29 19:01:56.354: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-86td2
Jan 29 19:01:56.354: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-sl29q
Jan 29 19:01:56.354: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 19:01:56.354: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 19:01:56.354: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 19:01:56.354: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 19:01:56.354: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_950167c8-36b9-42df-8a85-3a9d28c53b4d became leader
Jan 29 19:01:56.354: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8362f635-12b0-418d-8264-942880514a9e became leader
Jan 29 19:01:56.354: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1c918cd0-bdd9-4406-82a9-d0c9fd5f6aa2 became leader
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sqslx to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.376080383s (1.376088044s including waiting)
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container autoscaler
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985-sqslx: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container autoscaler
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sqslx
Jan 29 19:01:56.354: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6j12: {kubelet bootstrap-e2e-minion-group-6j12} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6j12_kube-system(4b09de720b01bf61ad28571efe2a195a)
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kbdq: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container kube-proxy
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for kube-proxy-bootstrap-e2e-minion-group-zmlw: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-zmlw_kube-system(f79ee35ecf1fb040fbeb5b8a84a1dcae)
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 19:01:56.354: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc11fe53-5cf0-4193-a2bb-e6c9362442ab became leader
Jan 29 19:01:56.354: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_01571e77-c85b-4452-a422-92094f674352 became leader
Jan 29 19:01:56.354: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3563e5ef-6b74-4b5f-aaae-be9535c8b370 became leader
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-ch8vf to bootstrap-e2e-minion-group-zmlw
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 524.239661ms (524.253716ms including waiting)
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container default-http-backend
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99-ch8vf: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container default-http-backend
Jan 29 19:01:56.354: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-ch8vf
Jan 29 19:01:56.354: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 19:01:56.354: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 19:01:56.354: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 19:01:56.354: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 19:01:56.354: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-69vb9 to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 819.373409ms (819.391143ms including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.793896744s (1.793906041s including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-69vb9: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bsd85 to bootstrap-e2e-master
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.670101ms (733.681792ms including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.802128586s (1.802140747s including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-bsd85: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-k4wx2 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.06682ms (714.080021ms including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.785588602s (1.785596591s including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-k4wx2: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-sxj7d to bootstrap-e2e-minion-group-kbdq
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.105616ms (714.11794ms including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metadata-proxy
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.882455818s (1.882464632s including waiting)
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1-sxj7d: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container prometheus-to-sd-exporter
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bsd85
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-k4wx2
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-69vb9
Jan 29 19:01:56.354: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-sxj7d
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-57s7b to bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.867162036s (1.867179734s including waiting)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.143065018s (1.143075491s including waiting)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Created: Created container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Started: Started container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c-57s7b: {kubelet bootstrap-e2e-minion-group-6j12} Killing: Stopping container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-57s7b
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-57s7b
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rbv42 to bootstrap-e2e-minion-group-kbdq
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.329430274s (1.329453807s including waiting)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 999.838364ms (999.850042ms including waiting)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Created: Created container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Started: Started container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": dial tcp 10.64.3.4:10250: connect: connection refused
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": dial tcp 10.64.3.4:10250: connect: connection refused
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Stopping container metrics-server-nanny
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} Failed: Error: failed to get sandbox container task: no running task found: task 9b8fcc9e9e402a3c97e0f4aec77203618c2c01ccfd4d4d09a7ae88ba7b697e9a not found: not found
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9-rbv42: {kubelet bootstrap-e2e-minion-group-kbdq} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rbv42
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 19:01:56.354: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-zmlw
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 1.399668429s (1.399675942s including waiting)
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Created: Created container volume-snapshot-controller
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Started: Started container volume-snapshot-controller
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Killing: Stopping container volume-snapshot-controller
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-zmlw} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(998e9588-4f8a-4c36-bffc-169b133e589e)
Jan 29 19:01:56.354: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 19:01:56.354 (53ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:01:56.354
Jan 29 19:01:56.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 19:01:56.398 (44ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:01:56.399
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 19:01:56.399 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 19:01:56.399
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 19:01:56.399
STEP: Collecting events from namespace "reboot-3555". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 19:01:56.399
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 19:01:56.44
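The "event for <object>: {<source>} <reason>: <message>" lines above come from the AfterEach debug hook, which lists every event in the kube-system namespace after a failure. Below is a minimal client-go sketch of such a dump, illustrative only (dumpEvents is a hypothetical name, not the framework's helper).

```go
// Illustrative sketch: list all events in a namespace and print them in
// roughly the "event for <object>: {<source>} <reason>: <message>" shape
// seen in the log above. dumpEvents is a hypothetical name.
package rebootsketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func dumpEvents(c kubernetes.Interface, namespace string) error {
	events, err := c.CoreV1().Events(namespace).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	// Matches the "Found N events." step line above.
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
	return nil
}
```

In a dump like the one above, the entries worth attention are typically the Unhealthy probe failures and BackOff restarts, since the routine Scheduled/Pulled/Created/Started events only record normal pod startup.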
Jan 29 19:01:56.481: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 19:01:56.481: INFO:
Jan 29 19:01:56.525: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 19:01:56.567: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6d594531-bf60-4169-a952-1435da6f1f19 582 0 2023-01-29 18:58:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 18:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 18:58:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:58:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-06/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory:
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 18:58:18 +0000 UTC,LastTransitionTime:2023-01-29 18:58:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:21 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:21 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:58:21 +0000 UTC,LastTransitionTime:2023-01-29 18:58:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:58:21 +0000 UTC,LastTransitionTime:2023-01-29 18:58:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.160.185,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-06.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:715ad78430040f7d6ba514abe5aaad49,SystemUUID:715ad784-3004-0f7d-6ba5-14abe5aaad49,BootID:68c04943-fcd4-4db6-91f3-becf325d9eb5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 19:01:56.568: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 19:01:56.611: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 19:01:56.654: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Jan 29 19:01:56.654: INFO: Logging node info for node bootstrap-e2e-minion-group-6j12
Jan 29 19:01:56.697: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6j12 ab88abcc-a824-4e7b-91d9-e5b55ca7b07b 662 0 2023-01-29 18:58:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6j12 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach