go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:33:25.325
from ginkgo_report.xml
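For reference, the command the test pushes to each node over SSH (it appears escaped in the log entries below) is the following script. It inserts iptables rules that drop all inbound traffic except loopback for roughly two minutes and then removes them again, which is what drives each node NotReady and back. The indentation and the # comments are added here for readability; the commands themselves are taken verbatim from the log:

    nohup sh -c '
        set -x
        sleep 10
        # keep loopback traffic working before blocking everything else
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        # drop all other inbound packets
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        # after ~2 minutes, remove both rules again
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &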
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:25:40.384
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:25:40.384 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:25:40.384
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 01:25:40.385
Jan 30 01:25:40.385: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 01:25:40.386
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 01:27:27.285
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 01:27:27.386
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:27:27.524 (1m47.14s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:27:27.524
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:27:27.524 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 01:27:27.524
Jan 30 01:27:27.624: INFO: Getting bootstrap-e2e-minion-group-bt6j
Jan 30 01:27:27.624: INFO: Getting bootstrap-e2e-minion-group-dx3p
Jan 30 01:27:27.624: INFO: Getting bootstrap-e2e-minion-group-hkv2
Jan 30 01:27:27.716: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true
Jan 30 01:27:27.716: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dx3p condition Ready to be true
Jan 30 01:27:27.716: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true
Jan 30 01:27:27.763: INFO: Node bootstrap-e2e-minion-group-bt6j has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2]
Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2]
Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:27:27.763: INFO: Node bootstrap-e2e-minion-group-dx3p has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0]
Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0]
Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:27:27.763: INFO: Node bootstrap-e2e-minion-group-hkv2 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr]
Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr]
Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-x6fsx" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-6t4zl" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:27:27.835: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 71.500182ms
Jan 30 01:27:27.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }]
Jan 30 01:27:27.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=true. Elapsed: 71.401441ms
Jan 30 01:27:27.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx" satisfied condition "running and ready, or succeeded"
Jan 30 01:27:27.837: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 73.744155ms
Jan 30 01:27:27.837: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded"
Jan 30 01:27:27.837: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. Elapsed: 73.941785ms
Jan 30 01:27:27.837: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded"
Jan 30 01:27:27.837: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=true. Elapsed: 73.232794ms
Jan 30 01:27:27.837: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" satisfied condition "running and ready, or succeeded"
Jan 30 01:27:27.837: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr]
Jan 30 01:27:27.838: INFO: Getting external IP address for bootstrap-e2e-minion-group-hkv2
Jan 30 01:27:27.838: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-hkv2(34.82.9.96:22)
Jan 30 01:27:27.838: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 74.907769ms
Jan 30 01:27:27.838: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded"
Jan 30 01:27:27.838: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2]
Jan 30 01:27:27.838: INFO: Getting external IP address for bootstrap-e2e-minion-group-bt6j
Jan 30 01:27:27.838: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-bt6j(35.197.46.206:22)
Jan 30 01:27:27.838: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p": Phase="Running", Reason="", readiness=true. Elapsed: 74.514893ms
Jan 30 01:27:27.838: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" satisfied condition "running and ready, or succeeded"
Jan 30 01:27:27.838: INFO: Pod "metadata-proxy-v0.1-6t4zl": Phase="Running", Reason="", readiness=true. Elapsed: 74.406239ms
Jan 30 01:27:27.838: INFO: Pod "metadata-proxy-v0.1-6t4zl" satisfied condition "running and ready, or succeeded"
Jan 30 01:27:28.380: INFO: ssh prow@34.82.9.96:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 30 01:27:28.380: INFO: ssh prow@34.82.9.96:22: stdout: ""
Jan 30 01:27:28.380: INFO: ssh prow@34.82.9.96:22: stderr: ""
Jan 30 01:27:28.380: INFO: ssh prow@34.82.9.96:22: exit code: 0
Jan 30 01:27:28.380: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be false
Jan 30 01:27:28.391: INFO: ssh prow@35.197.46.206:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 30 01:27:28.391: INFO: ssh prow@35.197.46.206:22: stdout: ""
Jan 30 01:27:28.391: INFO: ssh prow@35.197.46.206:22: stderr: ""
Jan 30 01:27:28.391: INFO: ssh prow@35.197.46.206:22: exit code: 0
Jan 30 01:27:28.391: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be false
Jan 30 01:27:28.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:28.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:29.882: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.118510522s
Jan 30 01:27:29.882: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }]
Jan 30 01:27:30.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:30.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:31.878: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 4.114853973s Jan 30 01:27:31.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:32.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:32.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:33.878: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.114501311s Jan 30 01:27:33.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:34.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:34.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:35.904: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.140292826s Jan 30 01:27:35.904: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:36.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:36.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:27:37.878: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.114260987s Jan 30 01:27:37.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:38.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:38.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:39.879: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.115709249s Jan 30 01:27:39.879: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:40.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:40.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:41.878: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.114448829s Jan 30 01:27:41.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:42.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:42.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:43.877: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.11333217s Jan 30 01:27:43.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:44.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:44.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:45.877: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.113773412s Jan 30 01:27:45.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:46.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:46.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:47.877: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.114090703s Jan 30 01:27:47.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:48.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 30 01:27:48.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:49.880: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.116212315s
Jan 30 01:27:49.880: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 30 01:27:49.880: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0]
Jan 30 01:27:49.880: INFO: Getting external IP address for bootstrap-e2e-minion-group-dx3p
Jan 30 01:27:49.880: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-dx3p(34.145.43.138:22)
Jan 30 01:27:50.410: INFO: ssh prow@34.145.43.138:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 30 01:27:50.410: INFO: ssh prow@34.145.43.138:22: stdout: ""
Jan 30 01:27:50.410: INFO: ssh prow@34.145.43.138:22: stderr: ""
Jan 30 01:27:50.410: INFO: ssh prow@34.145.43.138:22: exit code: 0
Jan 30 01:27:50.410: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-dx3p condition Ready to be false
Jan 30 01:27:50.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:52.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:53.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:53.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:54.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:27:55.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled Jan 30 01:27:55.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:56.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:57.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:57.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:58.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:59.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:59.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:00.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:01.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:01.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:02.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:03.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:03.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:04.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:05.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:05.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:06.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:07.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:07.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:08.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:09.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:09.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:10.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:11.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:11.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:12.953: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:13.472: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:13.472: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:14.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:15.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:15.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:17.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:17.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:17.562: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:19.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:19.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:19.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 30 01:28:21.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:28:21.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:28:21.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:28:23.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:28:23.694: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true
Jan 30 01:28:23.694: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true
Jan 30 01:28:23.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:28:23.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:28:25.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:28:25.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:28:25.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:28:27.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:28:27.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:28:27.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:28:29.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:28:29.871: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:28:29.871: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:28:31.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:28:31.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true.
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:31.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:28:33.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:33.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:28:33.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:35.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:36.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:28:36.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:37.473: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:38.044: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:38.044: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:39.513: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:40.084: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:40.084: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:41.554: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:42.124: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:42.125: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:43.594: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:44.164: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:44.164: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:45.634: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:46.204: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:46.204: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:47.674: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:48.245: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:48.245: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:49.714: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:50.285: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:50.285: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:51.754: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:52.325: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:52.325: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 
01:28:53.794: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:54.365: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:54.365: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:55.835: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:56.406: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:56.406: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:57.875: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:58.446: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:58.446: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:59.915: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:00.487: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:00.487: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:29:01.955: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:02.526: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:02.526: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:29:03.995: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:04.566: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:29:04.566: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:06.036: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:06.606: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:06.606: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:29:08.076: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:14.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:14.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:14.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:16.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:16.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:16.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:18.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:18.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:29:18.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:20.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:20.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:20.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:22.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:22.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:22.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:24.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:24.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:24.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:26.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:26.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:26.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:28.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:28.665: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:28.665: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:29:30.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:30.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:30.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:32.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:32.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:32.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:34.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:34.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:34.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:36.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:36.853: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:36.853: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:38.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:38.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:38.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 30 01:29:40.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:40.945: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr]
Jan 30 01:29:40.945: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:29:40.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:29:40.945: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 01:29:40.988: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=true. Elapsed: 43.200546ms
Jan 30 01:29:40.988: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" satisfied condition "running and ready, or succeeded"
Jan 30 01:29:40.988: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. Elapsed: 43.507526ms
Jan 30 01:29:40.988: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded"
Jan 30 01:29:40.988: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr]
Jan 30 01:29:40.988: INFO: Reboot successful on node bootstrap-e2e-minion-group-hkv2
Jan 30 01:29:42.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:42.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:29:45.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:45.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:29:47.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:47.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:29:49.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:49.144: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:29:51.145: INFO: Node bootstrap-e2e-minion-group-dx3p didn't reach desired Ready condition status (false) within 2m0s
Jan 30 01:29:51.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:53.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:55.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:57.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:29:59.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:30:01.408: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:30:03.452: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:30:05.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:30:07.541: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:30:09.584: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:30:11.628: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}].
Jan 30 01:30:13.671 to 01:32:14.274: INFO: (same message polled every ~2s) Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
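The "Automatically polling progress" reports that follow include goroutine dumps: the spec goroutine (6597) is parked in sync.(*WaitGroup).Wait inside testReboot, while a per-node goroutine (6599) sleeps inside WaitConditionToBe. That is a fan-out-and-wait pattern; a hedged sketch of it follows, with illustrative names rather than the actual reboot.go code.

```go
// Illustrative sketch of the fan-out visible in the goroutine dumps below: one
// goroutine per node runs the reboot-and-recover check while the spec goroutine
// blocks on a WaitGroup.
package rebootsketch

import (
	"fmt"
	"sync"
)

func rebootAll(nodes []string, rebootOne func(name string) bool) bool {
	results := make([]bool, len(nodes))
	var wg sync.WaitGroup
	for i, name := range nodes {
		i, name := i, name // capture loop variables for the goroutine
		wg.Add(1)
		go func() {
			defer wg.Done()
			// rebootOne is expected to disrupt the node (e.g. drop inbound
			// traffic), wait for it to go NotReady, then wait for Ready again.
			results[i] = rebootOne(name)
		}()
	}
	// This Wait is where the spec goroutine (6597) is parked in the dumps below.
	wg.Wait()
	ok := true
	for i, passed := range results {
		if !passed {
			fmt.Printf("Node %s failed reboot test.\n", nodes[i])
			ok = false
		}
	}
	return ok
}
```

This also matches the final verdict further down: once the WaitGroup drains, each node whose goroutine reported failure is listed as "failed reboot test".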
Jan 30 01:32:16.317 to 01:32:26.535: INFO: (same message polled every ~2s) Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure

Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 6m47.14s)
  test/e2e/cloud/gcp/reboot.go:136
  In [It] (Node Runtime: 5m0s)
    test/e2e/cloud/gcp/reboot.go:136

  Spec Goroutine
  goroutine 6597 [semacquire, 6 minutes]
    sync.runtime_Semacquire(0xc00163ac18?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7f2f94e7e118?)
      /usr/local/go/src/sync/waitgroup.go:139
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f2f94e7e118?, 0xc005068840}, {0x8147128?, 0xc00348a1a0}, {0xc000320820, 0x182}, 0xc00502df50)
      test/e2e/cloud/gcp/reboot.go:181
  > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7f2f94e7e118, 0xc005068840})
      test/e2e/cloud/gcp/reboot.go:141
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111f08?, 0xc005068840})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841

  Goroutines of Interest
  goroutine 6599 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f2f94e7e118, 0xc005068840}, {0x8147128, 0xc00348a1a0}, {0xc002cc8800, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
      test/e2e/framework/node/wait.go:143
  > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f2f94e7e118, 0xc005068840}, {0x8147128, 0xc00348a1a0}, {0x7fffb179d5f8, 0x3}, {0xc002cc8800, 0x1f}, {0xc000320820, 0x182})
      test/e2e/cloud/gcp/reboot.go:301
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0)
      test/e2e/cloud/gcp/reboot.go:173
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169

Jan 30 01:32:28.579 to 01:32:46.988: INFO: (same message polled every ~2s) Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure

Automatically polling progress (Spec Runtime: 7m7.141s, Node Runtime: 5m20.002s): same stacks as the previous report, with the spec goroutine 6597 blocked in sync.(*WaitGroup).Wait at reboot.go:181 and goroutine 6599 sleeping in WaitConditionToBe at wait.go:119.

Jan 30 01:32:49.032 to 01:33:07.426: INFO: (same message polled every ~2s) Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure

Automatically polling progress (Spec Runtime: 7m27.143s, Node Runtime: 5m40.004s): same stacks again, goroutine 6597 now reported as [semacquire, 7 minutes].

Jan 30 01:33:09.473 to 01:33:21.734: INFO: (same message polled every ~2s) Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure

Jan 30 01:33:23.735: INFO: Node bootstrap-e2e-minion-group-bt6j didn't reach desired Ready condition status (true) within 5m0s
Jan 30 01:33:23.735: INFO: Node bootstrap-e2e-minion-group-bt6j failed reboot test.
Jan 30 01:33:23.735: INFO: Node bootstrap-e2e-minion-group-dx3p failed reboot test.
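The termination-hook output that follows dumps /tmp/drop-inbound.log from each node; its "+ ..." trace lines imply a drop-inbound command of roughly the shape sketched here. This is a reconstruction from the trace alone (the retry loops around each iptables call, visible as "+ true ... + break", are elided), and the helper name is an assumption rather than the literal code in test/e2e/cloud/gcp/reboot.go.

```go
// A hedged reconstruction of the drop-inbound command whose trace appears in
// /tmp/drop-inbound.log below: allow loopback first, drop every other inbound
// packet for dropSeconds, then remove both rules again.
package rebootsketch

import "fmt"

func dropInboundCommand(dropSeconds int) string {
	script := fmt.Sprintf(
		"sleep 10 && "+
			"sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && "+
			"sudo iptables -I INPUT 2 -j DROP && "+
			"date && "+
			"sleep %d && "+
			"sudo iptables -D INPUT -j DROP && "+
			"sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT",
		dropSeconds)
	// Run it detached with tracing so the SSH session returns immediately and
	// the trace lands in the file the termination hook later cats.
	return fmt.Sprintf("nohup sh -x -c %q >/tmp/drop-inbound.log 2>&1 &", script)
}
```

In the traces below the drop window is 120 seconds (sleep 120); the loopback ACCEPT rule is inserted first so local traffic keeps working while everything else is dropped, which is consistent with the node.kubernetes.io/unreachable taints the NodeController applied above.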
Jan 30 01:33:23.735: INFO: Executing termination hook on nodes
Jan 30 01:33:23.735: INFO: Getting external IP address for bootstrap-e2e-minion-group-bt6j
Jan 30 01:33:23.735: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-bt6j(35.197.46.206:22)
Jan 30 01:33:24.260: INFO: ssh prow@35.197.46.206:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 30 01:33:24.260: INFO: ssh prow@35.197.46.206:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 01:27:38 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 30 01:33:24.260: INFO: ssh prow@35.197.46.206:22: stderr: ""
Jan 30 01:33:24.260: INFO: ssh prow@35.197.46.206:22: exit code: 0
Jan 30 01:33:24.260: INFO: Getting external IP address for bootstrap-e2e-minion-group-dx3p
Jan 30 01:33:24.260: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-dx3p(34.145.43.138:22)
Jan 30 01:33:24.799: INFO: ssh prow@34.145.43.138:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 30 01:33:24.799: INFO: ssh prow@34.145.43.138:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 01:28:00 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 30 01:33:24.799: INFO: ssh prow@34.145.43.138:22: stderr: ""
Jan 30 01:33:24.799: INFO: ssh prow@34.145.43.138:22: exit code: 0
Jan 30 01:33:24.799: INFO: Getting external IP address for bootstrap-e2e-minion-group-hkv2
Jan 30 01:33:24.799: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-hkv2(34.82.9.96:22)
Jan 30 01:33:25.325: INFO: ssh prow@34.82.9.96:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 30 01:33:25.325: INFO: ssh prow@34.82.9.96:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 01:27:38 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 30 01:33:25.325: INFO: ssh prow@34.82.9.96:22: stderr: ""
Jan 30 01:33:25.325: INFO: ssh prow@34.82.9.96:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:33:25.325
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 01:33:25.325 (5m57.801s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:33:25.325
STEP: Collecting events from namespace "kube-system".
- test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 01:33:25.325 Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-ftgx9 to bootstrap-e2e-minion-group-hkv2 Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.002631704s (1.002651112s including waiting) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": dial tcp 10.64.2.3:8181: connect: connection refused Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.10:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.10:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Container coredns failed liveness probe, will be restarted Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.10:8181/ready": dial tcp 10.64.2.10:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-wfgss to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.312534938s (1.312544829s including waiting) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "http://10.64.3.19:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.19:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wfgss_kube-system(fd7e5efb-e6c8-4618-8180-372906aca7b7) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-wfgss Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wfgss_kube-system(fd7e5efb-e6c8-4618-8180-372906aca7b7) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-wfgss Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-ftgx9 Jan 30 01:33:25.383: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 01:33:25.383: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_4cdf3 became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ac338 became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ce918 became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_95a0e became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b698e became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_c9f6e became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_def12 became leader Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-8dmqc to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image 
"registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.553029522s (1.553048635s including waiting) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Failed: Error: failed to get sandbox container task: no running task found: task 86cfa70222386362fc21e6e023af3c49885ce70bddff79db189147c2227c0263 not found: not found Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-8dmqc_kube-system(a86afb6b-ee26-4ee2-9404-ff14a1aeed70) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.22:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-8dmqc_kube-system(a86afb6b-ee26-4ee2-9404-ff14a1aeed70) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-8dmqc_kube-system(a86afb6b-ee26-4ee2-9404-ff14a1aeed70) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.50:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9j2sg to bootstrap-e2e-minion-group-bt6j Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 656.707165ms (656.714689ms including waiting) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9j2sg_kube-system(5f7283c4-d762-4a76-9256-c7f2436df7b8) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Failed: Error: failed to get sandbox container task: no running task found: task 9a140d170ea34dd325d74d04502b642d66d48dc918c508b31dfb8ef904c34432 not found: not found Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9j2sg_kube-system(5f7283c4-d762-4a76-9256-c7f2436df7b8) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "http://10.64.0.16:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9psf2 to bootstrap-e2e-minion-group-hkv2 Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.037846ms (657.0539ms including waiting) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Failed: Error: failed to get sandbox container task: no running task found: task 407f4fd26023877d10eebda20a4d5c9df500dcd16aae590846edc1a34c8af1f5 not found: not found Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9psf2_kube-system(67a256ba-75bf-455f-b0c8-cf102cff2423) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9psf2_kube-system(67a256ba-75bf-455f-b0c8-cf102cff2423) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-8dmqc Jan 30 01:33:25.384: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9j2sg Jan 30 01:33:25.384: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9psf2 Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 30 01:33:25.384: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 30 01:33:25.384: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 01:33:25.384: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 01:33:25.384: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 01:33:25.384: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(548d8a4d412ea624192633f425ca8149) Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_be709c60-2a3f-4849-ab64-ecee40b17104 became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0d612d5f-2e3f-4a93-a500-bf3745a493f8 became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de6ee059-96ba-4c3d-bfde-95cf9e7419b1 became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_981f6359-bd6e-4ea5-8b97-1399424ecde9 became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e45780c5-7c56-43d2-bee3-3b5de7a3ce4e became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_38a2ca4d-d586-469a-9141-6e91cfbf3c0e became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_68e8b07d-1e8d-44d3-a16a-ac4fb54b83fd became leader Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. 
preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-x6fsx to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.673140332s (2.673153883s including waiting) Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-x6fsx_kube-system(316ca4a7-6c99-481e-a0ff-1766a6a888be) Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-x6fsx Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-x6fsx_kube-system(316ca4a7-6c99-481e-a0ff-1766a6a888be) Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-x6fsx_kube-system(316ca4a7-6c99-481e-a0ff-1766a6a888be) Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-x6fsx Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet 
bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bt6j_kube-system(6671c8c6e4e16a3c254833ebe19049da) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bt6j_kube-system(6671c8c6e4e16a3c254833ebe19049da) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bt6j_kube-system(6671c8c6e4e16a3c254833ebe19049da) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-dx3p_kube-system(cdadd6623acbd4ce0baf8d2112f24c5c) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-dx3p_kube-system(cdadd6623acbd4ce0baf8d2112f24c5c) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(ecb5a5dcd22e71f77775e7d311196ff2) Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and 
re-created. Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_46660e88-d124-4363-8950-417bf47fc5ec became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4963297b-5fc0-4e05-bc1c-8c1650a00819 became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_afbdfa26-5274-47a0-9831-809769f20f6c became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ab8bcd4c-da94-4a00-bdb9-4647d5d24710 became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f794b2d6-a10a-4d64-bc1d-5b73f901cfdf became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_158f4c8c-6cc2-471c-a7ac-c6d9ae954f8e became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f6cd8b96-575a-4c71-a4dc-e242e928b304 became leader Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9cjjm to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 591.563236ms (591.574604ms including waiting) Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: 
{taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-9cjjm Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.39:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9cjjm Jan 30 01:33:25.384: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://10.138.0.2:8086/healthz": dial tcp 10.138.0.2:8086: connect: connection refused Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {default-scheduler } Scheduled: Successfully assigned 
kube-system/metadata-proxy-v0.1-6t4zl to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 879.557706ms (879.569889ms including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.047520415s (2.047529815s including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-jc4vr to bootstrap-e2e-minion-group-hkv2 Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 780.744165ms (780.763471ms including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.829700722s (1.829716599s including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-mrhx2 to bootstrap-e2e-minion-group-bt6j Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.937195ms (714.950733ms including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:33:25.384: INFO: event for 
metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.784882746s (1.784891187s including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qndlb to bootstrap-e2e-master Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 800.897673ms (800.905188ms including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.422135272s (2.422143281s including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-mrhx2 Jan 30 01:33:25.384: INFO: event 
for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qndlb Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-jc4vr Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-6t4zl Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-6btrg to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.86733657s (2.867346928s including waiting) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.963636058s (3.963650732s including waiting) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "https://10.64.3.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "https://10.64.3.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-6btrg Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-6btrg Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-jpr66 to bootstrap-e2e-minion-group-bt6j Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.300742334s (1.300752166s including waiting) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 960.128176ms (960.148415ms including waiting) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get 
"https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "https://10.64.0.10:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Container metrics-server failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": dial tcp 10.64.0.10:10250: connect: connection refused Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-jpr66 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.13:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "https://10.64.0.13:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.13:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-jpr66_kube-system(0b345cb2-f3c4-4728-8749-e13c49e0d5b6) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-jpr66_kube-system(0b345cb2-f3c4-4728-8749-e13c49e0d5b6) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-jpr66 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 01:33:25.384: INFO: event for 
metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.442535419s (2.442561914s including waiting) Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301) Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301) Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:33:25.385: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301) Jan 30 01:33:25.385: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:33:25.385 (59ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:33:25.385 Jan 30 01:33:25.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 30 01:33:25.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:27.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:33:29.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:31.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:33.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:35.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:37.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:39.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:41.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:43.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:45.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:47.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:49.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:51.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:33:53.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:55.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:57.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:59.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:01.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:03.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:05.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:07.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:09.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:11.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:13.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:15.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:34:17.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:19.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:21.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:23.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:34:25.477 (1m0.093s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:34:25.477 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:34:25.477 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:34:25.477 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:34:25.477 STEP: Collecting events from namespace "reboot-3498". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 01:34:25.478 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 01:34:25.518 Jan 30 01:34:25.560: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 01:34:25.560: INFO: Jan 30 01:34:25.605: INFO: Logging node info for node bootstrap-e2e-master Jan 30 01:34:25.647: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 7a2bd2df-fc42-4d55-8404-5b2a0412e072 2976 0 2023-01-30 01:04:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 01:04:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-30 01:04:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 01:31:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:19 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:19 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:19 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:31:19 +0000 UTC,LastTransitionTime:2023-01-30 01:04:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.184.40,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5736e6f149167618f71cd530dafef4cc,SystemUUID:5736e6f1-4916-7618-f71c-d530dafef4cc,BootID:fe689329-330a-4af4-8223-73b99031148e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:125279031,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:34:25.647: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 01:34:25.694: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 01:34:25.756: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container etcd-container ready: true, restart count 0 Jan 30 01:34:25.757: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 01:34:25.757: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container kube-controller-manager ready: true, restart count 8 Jan 30 01:34:25.757: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 01:03:55 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 30 01:34:25.757: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container etcd-container ready: true, restart count 5 Jan 30 01:34:25.757: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 30 01:34:25.757: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container kube-scheduler ready: true, restart count 7 Jan 30 01:34:25.757: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 01:03:55 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container l7-lb-controller ready: true, restart count 9 Jan 30 01:34:25.757: INFO: metadata-proxy-v0.1-qndlb started at 2023-01-30 01:04:22 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:25.757: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 01:34:25.757: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 01:34:25.950: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 01:34:25.950: INFO: Logging node info for node bootstrap-e2e-minion-group-bt6j Jan 30 01:34:25.993: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-bt6j efad890a-089b-40bf-b3d0-1106dec194f4 3167 0 2023-01-30 01:04:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-bt6j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 01:28:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-30 01:29:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-30 01:29:43 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-30 01:34:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} }]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-minion-group-bt6j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:29:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:29:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:29:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:29:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.46.206,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-bt6j.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-bt6j.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f7ade72fddb4327ba8b5c5a9c07f04c,SystemUUID:3f7ade72-fddb-4327-ba8b-5c5a9c07f04c,BootID:e145d8d8-8bdd-40a3-b85d-a02004edfa80,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:34:25.993: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-bt6j Jan 30 01:34:26.040: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-bt6j Jan 30 01:34:26.107: INFO: kube-proxy-bootstrap-e2e-minion-group-bt6j started at 2023-01-30 01:04:20 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.107: INFO: Container kube-proxy ready: true, restart count 7 Jan 30 01:34:26.107: INFO: metadata-proxy-v0.1-mrhx2 started at 2023-01-30 01:04:21 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:26.107: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:34:26.107: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:34:26.107: INFO: konnectivity-agent-9j2sg started at 2023-01-30 01:04:40 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.107: INFO: Container konnectivity-agent ready: true, restart count 9 Jan 30 01:34:26.107: INFO: metrics-server-v0.5.2-867b8754b9-jpr66 started at 2023-01-30 01:05:48 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:26.107: INFO: Container metrics-server ready: false, restart count 11 Jan 30 01:34:26.107: INFO: Container metrics-server-nanny ready: false, restart count 9 Jan 30 01:34:26.274: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-bt6j Jan 30 01:34:26.274: INFO: Logging node info for node bootstrap-e2e-minion-group-dx3p Jan 30 01:34:26.317: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-dx3p 97ee0a06-78c0-423b-b6ac-5763006307f0 2975 0 2023-01-30 01:04:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-dx3p kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 01:13:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 01:21:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 01:30:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-30 01:31:17 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-minion-group-dx3p,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:17 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:17 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:17 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:31:17 +0000 UTC,LastTransitionTime:2023-01-30 01:21:07 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.43.138,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-dx3p.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-dx3p.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:04cfc971fb8b0e96ce2e62a783445108,SystemUUID:04cfc971-fb8b-0e96-ce2e-62a783445108,BootID:9b9d29fb-6452-40ca-80e4-4ded665f8322,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 
registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:34:26.318: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-dx3p Jan 30 01:34:26.366: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-dx3p Jan 30 01:34:26.433: INFO: kube-proxy-bootstrap-e2e-minion-group-dx3p started at 2023-01-30 01:04:27 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container kube-proxy ready: true, restart count 8 Jan 30 01:34:26.433: INFO: l7-default-backend-8549d69d99-9cjjm started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container default-http-backend ready: true, restart count 4 Jan 30 01:34:26.433: INFO: volume-snapshot-controller-0 started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container volume-snapshot-controller ready: false, restart count 15 Jan 30 01:34:26.433: INFO: coredns-6846b5b5f-wfgss started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container coredns ready: false, restart count 6 Jan 30 01:34:26.433: INFO: kube-dns-autoscaler-5f6455f985-x6fsx started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container autoscaler ready: true, restart count 10 Jan 30 01:34:26.433: INFO: metadata-proxy-v0.1-6t4zl started at 2023-01-30 01:04:28 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:26.433: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:34:26.433: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:34:26.433: INFO: konnectivity-agent-8dmqc started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container konnectivity-agent ready: false, restart count 9 Jan 30 01:34:26.600: INFO: Latency metrics for node bootstrap-e2e-minion-group-dx3p Jan 30 01:34:26.600: INFO: Logging node info for node bootstrap-e2e-minion-group-hkv2 Jan 30 01:34:26.643: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hkv2 e09d3248-9b99-4af7-a475-ce2f98c7c753 3158 0 2023-01-30 01:04:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hkv2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 01:28:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-30 01:28:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 01:29:40 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 01:29:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-minion-group-hkv2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:40 +0000 UTC,LastTransitionTime:2023-01-30 01:29:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:40 +0000 UTC,LastTransitionTime:2023-01-30 01:29:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:40 +0000 UTC,LastTransitionTime:2023-01-30 01:29:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:29:40 +0000 UTC,LastTransitionTime:2023-01-30 01:29:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.9.96,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hkv2.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hkv2.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6477a1a0d081fbd58d469fc57fe2da0f,SystemUUID:6477a1a0-d081-fbd5-8d46-9fc57fe2da0f,BootID:4fe9f4b7-cf7b-4f13-a1b8-cde7d10f2058,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:34:26.643: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hkv2 Jan 30 01:34:26.691: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hkv2 Jan 30 01:34:26.756: INFO: kube-proxy-bootstrap-e2e-minion-group-hkv2 started at 2023-01-30 01:04:23 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.756: INFO: Container kube-proxy ready: false, restart count 12 Jan 30 01:34:26.756: INFO: metadata-proxy-v0.1-jc4vr started at 2023-01-30 01:04:24 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:26.756: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:34:26.756: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:34:26.756: INFO: konnectivity-agent-9psf2 started at 2023-01-30 01:04:40 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.756: INFO: Container konnectivity-agent ready: true, restart count 8 Jan 30 01:34:26.756: INFO: coredns-6846b5b5f-ftgx9 started at 2023-01-30 01:04:47 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.756: INFO: Container coredns ready: true, restart count 6 Jan 30 01:34:26.927: INFO: Latency metrics for node bootstrap-e2e-minion-group-hkv2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:34:26.927 (1.45s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:34:26.927 (1.45s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] 
Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:34:26.927 STEP: Destroying namespace "reboot-3498" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 01:34:26.927 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:34:26.971 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:34:26.971 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:34:26.971 (0s)
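For readability, the payload that the test pushes to each node over SSH (it appears in escaped form in the log below, as SSH "\n\t\tnohup sh -c '...'" entries) decodes to the following shell script. The commands are exactly as logged; only the # comments are added here as interpretation.

nohup sh -c '
    set -x
    sleep 10     # let the SSH session return before inbound traffic is cut
    # retry until the rule sticks: allow loopback so node-local traffic keeps working
    while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
    # drop every other inbound packet (rule sits just below the loopback ACCEPT)
    while true; do sudo iptables -I INPUT 2 -j DROP && break; done
    date
    sleep 120    # keep inbound traffic dropped for two minutes
    # restore connectivity by deleting both rules again
    while true; do sudo iptables -D INPUT -j DROP && break; done
    while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-inbound.log 2>&1 &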
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:33:25.325
from junit_01.xml
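The repeated "Condition Ready of node ... is true instead of false" and "... is false instead of true" lines in the trace below come from the framework polling each node's Ready condition: it first waits for the node to drop out of Ready while inbound packets are blocked, then waits for it to come back. A rough manual equivalent with kubectl (an illustrative sketch, not the framework's own code; the node name is simply one of this run's nodes) would be:

# Print the Ready condition status and reason for one node, once per second.
while true; do
  kubectl get node bootstrap-e2e-minion-group-bt6j \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status} {.status.conditions[?(@.type=="Ready")].reason}{"\n"}'
  sleep 1
done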
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:25:40.384 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:25:40.384 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:25:40.384 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 01:25:40.385 Jan 30 01:25:40.385: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 01:25:40.386 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 01:27:27.285 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 01:27:27.386 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:27:27.524 (1m47.14s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:27:27.524 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:27:27.524 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 01:27:27.524 Jan 30 01:27:27.624: INFO: Getting bootstrap-e2e-minion-group-bt6j Jan 30 01:27:27.624: INFO: Getting bootstrap-e2e-minion-group-dx3p Jan 30 01:27:27.624: INFO: Getting bootstrap-e2e-minion-group-hkv2 Jan 30 01:27:27.716: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:27:27.716: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dx3p condition Ready to be true Jan 30 01:27:27.716: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true Jan 30 01:27:27.763: INFO: Node bootstrap-e2e-minion-group-bt6j has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:27:27.763: INFO: Node bootstrap-e2e-minion-group-dx3p has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:27:27.763: INFO: Node bootstrap-e2e-minion-group-hkv2 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:27:27.763: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-x6fsx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-6t4zl" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:27:27.764: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:27:27.835: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 71.500182ms Jan 30 01:27:27.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:27.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=true. Elapsed: 71.401441ms Jan 30 01:27:27.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx" satisfied condition "running and ready, or succeeded" Jan 30 01:27:27.837: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 73.744155ms Jan 30 01:27:27.837: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded" Jan 30 01:27:27.837: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. Elapsed: 73.941785ms Jan 30 01:27:27.837: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded" Jan 30 01:27:27.837: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=true. Elapsed: 73.232794ms Jan 30 01:27:27.837: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" satisfied condition "running and ready, or succeeded" Jan 30 01:27:27.837: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:27:27.838: INFO: Getting external IP address for bootstrap-e2e-minion-group-hkv2 Jan 30 01:27:27.838: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-hkv2(34.82.9.96:22) Jan 30 01:27:27.838: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 74.907769ms Jan 30 01:27:27.838: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded" Jan 30 01:27:27.838: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:27:27.838: INFO: Getting external IP address for bootstrap-e2e-minion-group-bt6j Jan 30 01:27:27.838: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-bt6j(35.197.46.206:22) Jan 30 01:27:27.838: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p": Phase="Running", Reason="", readiness=true. Elapsed: 74.514893ms Jan 30 01:27:27.838: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" satisfied condition "running and ready, or succeeded" Jan 30 01:27:27.838: INFO: Pod "metadata-proxy-v0.1-6t4zl": Phase="Running", Reason="", readiness=true. 
Elapsed: 74.406239ms Jan 30 01:27:27.838: INFO: Pod "metadata-proxy-v0.1-6t4zl" satisfied condition "running and ready, or succeeded" Jan 30 01:27:28.380: INFO: ssh prow@34.82.9.96:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 01:27:28.380: INFO: ssh prow@34.82.9.96:22: stdout: "" Jan 30 01:27:28.380: INFO: ssh prow@34.82.9.96:22: stderr: "" Jan 30 01:27:28.380: INFO: ssh prow@34.82.9.96:22: exit code: 0 Jan 30 01:27:28.380: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be false Jan 30 01:27:28.391: INFO: ssh prow@35.197.46.206:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 01:27:28.391: INFO: ssh prow@35.197.46.206:22: stdout: "" Jan 30 01:27:28.391: INFO: ssh prow@35.197.46.206:22: stderr: "" Jan 30 01:27:28.391: INFO: ssh prow@35.197.46.206:22: exit code: 0 Jan 30 01:27:28.391: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be false Jan 30 01:27:28.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:28.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:29.882: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.118510522s Jan 30 01:27:29.882: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:30.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:30.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:31.878: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.114853973s Jan 30 01:27:31.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:32.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:32.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:33.878: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.114501311s Jan 30 01:27:33.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:34.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:34.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:35.904: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.140292826s Jan 30 01:27:35.904: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:36.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:36.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:27:37.878: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.114260987s Jan 30 01:27:37.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:38.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:38.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:39.879: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.115709249s Jan 30 01:27:39.879: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:40.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:40.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:41.878: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.114448829s Jan 30 01:27:41.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:42.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:42.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:43.877: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.11333217s Jan 30 01:27:43.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:44.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:44.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:45.877: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.113773412s Jan 30 01:27:45.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:46.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:46.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:47.877: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.114090703s Jan 30 01:27:47.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:26:27 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:27:48.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:27:48.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:49.880: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.116212315s Jan 30 01:27:49.880: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 01:27:49.880: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:27:49.880: INFO: Getting external IP address for bootstrap-e2e-minion-group-dx3p Jan 30 01:27:49.880: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-dx3p(34.145.43.138:22) Jan 30 01:27:50.410: INFO: ssh prow@34.145.43.138:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 01:27:50.410: INFO: ssh prow@34.145.43.138:22: stdout: "" Jan 30 01:27:50.410: INFO: ssh prow@34.145.43.138:22: stderr: "" Jan 30 01:27:50.410: INFO: ssh prow@34.145.43.138:22: exit code: 0 Jan 30 01:27:50.410: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-dx3p condition Ready to be false Jan 30 01:27:50.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:52.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:53.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:53.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:54.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:55.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:27:55.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:56.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:57.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:57.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:58.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:59.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:27:59.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:00.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:01.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:01.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:02.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:03.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:03.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:04.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:05.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:05.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:06.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:07.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:07.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:08.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:09.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:09.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:10.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:11.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:11.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:12.953: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:13.472: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:13.472: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:14.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:15.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:15.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:17.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:17.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:17.562: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:19.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:19.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:19.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:28:21.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:21.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:21.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:23.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:23.694: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:28:23.694: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true Jan 30 01:28:23.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:23.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:25.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:25.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:25.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:27.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:27.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:27.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:28:29.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:29.871: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:29.871: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:28:31.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:31.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:31.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:28:33.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:33.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:28:33.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:35.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:28:36.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:28:36.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:28:37.473: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:38.044: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:38.044: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:39.513: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:40.084: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:40.084: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:41.554: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:42.124: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:42.125: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:43.594: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:44.164: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:44.164: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:45.634: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:46.204: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:46.204: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:47.674: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:48.245: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:48.245: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:49.714: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:50.285: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:50.285: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:51.754: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:52.325: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:52.325: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 
01:28:53.794: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:54.365: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:54.365: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:55.835: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:56.406: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:56.406: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:57.875: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:28:58.446: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:28:58.446: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:28:59.915: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:00.487: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:00.487: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:29:01.955: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:02.526: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:02.526: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:29:03.995: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:04.566: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:29:04.566: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:06.036: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:06.606: INFO: Couldn't get node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:06.606: INFO: Couldn't get node bootstrap-e2e-minion-group-bt6j Jan 30 01:29:08.076: INFO: Couldn't get node bootstrap-e2e-minion-group-dx3p Jan 30 01:29:14.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:14.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:14.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:16.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:16.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:16.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:18.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:18.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:29:18.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:20.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:20.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:20.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:22.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:22.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:22.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:24.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:24.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:24.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:26.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:26.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:26.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:28.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:28.665: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:28.665: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:29:30.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:30.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:30.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:32.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:32.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:32.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:34.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:34.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:34.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:36.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:36.853: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:36.853: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:38.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:38.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:29:38.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:29:40.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:40.945: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:29:40.945: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:29:40.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:40.945: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:29:40.988: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=true. Elapsed: 43.200546ms Jan 30 01:29:40.988: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" satisfied condition "running and ready, or succeeded" Jan 30 01:29:40.988: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. Elapsed: 43.507526ms Jan 30 01:29:40.988: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded" Jan 30 01:29:40.988: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:29:40.988: INFO: Reboot successful on node bootstrap-e2e-minion-group-hkv2 Jan 30 01:29:42.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:42.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:45.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:45.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:47.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:47.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:49.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:29:49.144: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:29:51.145: INFO: Node bootstrap-e2e-minion-group-dx3p didn't reach desired Ready condition status (false) within 2m0s Jan 30 01:29:51.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:53.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:55.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:57.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:29:59.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:01.408: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:03.452: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:05.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:07.541: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:09.584: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:11.628: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:30:13.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:15.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:17.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:19.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:21.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:23.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:25.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:27.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:30.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:32.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:34.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:36.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:30:38.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:40.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:42.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:44.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:46.371: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:48.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:50.462: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:52.505: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:54.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:56.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:30:58.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:00.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:31:02.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:04.769: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:06.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:08.855: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:10.899: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:12.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:14.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:17.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:19.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:21.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:23.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:25.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:31:27.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:29.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:31.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:33.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:35.439: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:37.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:39.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:41.574: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:43.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:45.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:47.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:49.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:31:51.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:53.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:55.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:57.923: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:31:59.967: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:02.010: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:04.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:06.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:08.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:10.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:12.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:14.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:32:16.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:18.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:20.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:22.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:24.491: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:26.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 6m47.14s) test/e2e/cloud/gcp/reboot.go:136 In [It] (Node Runtime: 5m0s) test/e2e/cloud/gcp/reboot.go:136 Spec Goroutine goroutine 6597 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc00163ac18?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f2f94e7e118?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f2f94e7e118?, 0xc005068840}, {0x8147128?, 0xc00348a1a0}, {0xc000320820, 0x182}, 0xc00502df50) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7f2f94e7e118, 0xc005068840}) test/e2e/cloud/gcp/reboot.go:141 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111f08?, 0xc005068840}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 6599 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f2f94e7e118, 0xc005068840}, {0x8147128, 0xc00348a1a0}, {0xc002cc8800, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) 
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f2f94e7e118, 0xc005068840}, {0x8147128, 0xc00348a1a0}, {0x7fffb179d5f8, 0x3}, {0xc002cc8800, 0x1f}, {0xc000320820, 0x182}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 30 01:32:28.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:30.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:32.668: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:34.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:36.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:38.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:40.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:42.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:44.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:46.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 7m7.141s) test/e2e/cloud/gcp/reboot.go:136 In [It] (Node Runtime: 5m20.002s) test/e2e/cloud/gcp/reboot.go:136 Spec Goroutine goroutine 6597 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc00163ac18?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f2f94e7e118?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f2f94e7e118?, 0xc005068840}, {0x8147128?, 0xc00348a1a0}, {0xc000320820, 0x182}, 0xc00502df50) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7f2f94e7e118, 0xc005068840}) test/e2e/cloud/gcp/reboot.go:141 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111f08?, 0xc005068840}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 6599 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f2f94e7e118, 0xc005068840}, {0x8147128, 0xc00348a1a0}, {0xc002cc8800, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f2f94e7e118, 0xc005068840}, {0x8147128, 0xc00348a1a0}, {0x7fffb179d5f8, 0x3}, {0xc002cc8800, 0x1f}, {0xc000320820, 0x182}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 30 01:32:49.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:51.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:53.120: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:55.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:32:57.207: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:32:59.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:01.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:03.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:05.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:07.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 7m27.143s) test/e2e/cloud/gcp/reboot.go:136 In [It] (Node Runtime: 5m40.004s) test/e2e/cloud/gcp/reboot.go:136 Spec Goroutine goroutine 6597 [semacquire, 7 minutes] sync.runtime_Semacquire(0xc00163ac18?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f2f94e7e118?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f2f94e7e118?, 0xc005068840}, {0x8147128?, 0xc00348a1a0}, {0xc000320820, 0x182}, 0xc00502df50) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7f2f94e7e118, 0xc005068840}) test/e2e/cloud/gcp/reboot.go:141 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111f08?, 0xc005068840}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 6599 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f2f94e7e118, 0xc005068840}, {0x8147128, 0xc00348a1a0}, {0xc002cc8800, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) 
test/e2e/framework/node/wait.go:143
> k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f2f94e7e118, 0xc005068840}, {0x8147128, 0xc00348a1a0}, {0x7fffb179d5f8, 0x3}, {0xc002cc8800, 0x1f}, {0xc000320820, 0x182})
test/e2e/cloud/gcp/reboot.go:301
> k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0)
test/e2e/cloud/gcp/reboot.go:173
> k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
test/e2e/cloud/gcp/reboot.go:169
Jan 30 01:33:09.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:33:11.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:33:13.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:33:15.603: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:33:17.647: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:33:19.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:33:21.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure
Jan 30 01:33:23.735: INFO: Node bootstrap-e2e-minion-group-bt6j didn't reach desired Ready condition status (true) within 5m0s
Jan 30 01:33:23.735: INFO: Node bootstrap-e2e-minion-group-bt6j failed reboot test.
Jan 30 01:33:23.735: INFO: Node bootstrap-e2e-minion-group-dx3p failed reboot test.
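The goroutine dumps above show the reboot helper parked in WaitForNodeToBeReady -> WaitConditionToBe (test/e2e/framework/node/wait.go:119), sleeping between polls. The hex arguments in the trace (time.Sleep(0x77359400) and the final 0x45d964b800) work out to a 2-second poll interval and a 5-minute timeout, which matches both the roughly 2 s cadence of the "Condition Ready ..." lines and the "didn't reach desired Ready condition status (true) within 5m0s" message above. As a rough, standalone illustration of that kind of wait loop (not the framework's actual implementation; the function name, client-go usage, and error text are assumptions made for this sketch):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReadyStatus polls a node's Ready condition every pollInterval until it
// reports the wanted status or the timeout expires. Hypothetical helper, loosely
// modeled on the shape of the wait seen in the stack trace above.
func waitForNodeReadyStatus(ctx context.Context, c kubernetes.Interface, nodeName string,
	want v1.ConditionStatus, pollInterval, timeout time.Duration) error {

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			// Mirrors the "Couldn't get node ..." lines logged while the API was unreachable.
			fmt.Printf("Couldn't get node %s: %v\n", nodeName, err)
		} else {
			for _, cond := range node.Status.Conditions {
				if cond.Type == v1.NodeReady && cond.Status == want {
					return nil
				}
			}
		}
		time.Sleep(pollInterval) // the trace's time.Sleep(0x77359400) is 2e9 ns, i.e. 2 s
	}
	return fmt.Errorf("node %s didn't reach desired Ready condition status (%s) within %v",
		nodeName, want, timeout)
}

func main() {
	// Build a clientset from the kubeconfig in $KUBECONFIG (an assumption for this sketch).
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Wait up to 5 minutes for the node to report Ready=True, polling every 2 seconds.
	err = waitForNodeReadyStatus(context.Background(), clientset,
		"bootstrap-e2e-minion-group-bt6j", v1.ConditionTrue, 2*time.Second, 5*time.Minute)
	fmt.Println("result:", err)
}
```

Note that the framework's check additionally reports the NodeController taints seen in the log ("is true, but Node is tainted by NodeController with ..."); the sketch above only inspects the Ready condition itself.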
Jan 30 01:33:23.735: INFO: Executing termination hook on nodes
Jan 30 01:33:23.735: INFO: Getting external IP address for bootstrap-e2e-minion-group-bt6j
Jan 30 01:33:23.735: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-bt6j(35.197.46.206:22)
Jan 30 01:33:24.260: INFO: ssh prow@35.197.46.206:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 30 01:33:24.260: INFO: ssh prow@35.197.46.206:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 01:27:38 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 30 01:33:24.260: INFO: ssh prow@35.197.46.206:22: stderr: ""
Jan 30 01:33:24.260: INFO: ssh prow@35.197.46.206:22: exit code: 0
Jan 30 01:33:24.260: INFO: Getting external IP address for bootstrap-e2e-minion-group-dx3p
Jan 30 01:33:24.260: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-dx3p(34.145.43.138:22)
Jan 30 01:33:24.799: INFO: ssh prow@34.145.43.138:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 30 01:33:24.799: INFO: ssh prow@34.145.43.138:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 01:28:00 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 30 01:33:24.799: INFO: ssh prow@34.145.43.138:22: stderr: ""
Jan 30 01:33:24.799: INFO: ssh prow@34.145.43.138:22: exit code: 0
Jan 30 01:33:24.799: INFO: Getting external IP address for bootstrap-e2e-minion-group-hkv2
Jan 30 01:33:24.799: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-hkv2(34.82.9.96:22)
Jan 30 01:33:25.325: INFO: ssh prow@34.82.9.96:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 30 01:33:25.325: INFO: ssh prow@34.82.9.96:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 01:27:38 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 30 01:33:25.325: INFO: ssh prow@34.82.9.96:22: stderr: ""
Jan 30 01:33:25.325: INFO: ssh prow@34.82.9.96:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:33:25.325
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 01:33:25.325 (5m57.801s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:33:25.325
STEP: Collecting events from namespace "kube-system".
- test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 01:33:25.325 Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-ftgx9 to bootstrap-e2e-minion-group-hkv2 Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.002631704s (1.002651112s including waiting) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": dial tcp 10.64.2.3:8181: connect: connection refused Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.10:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.10:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Container coredns failed liveness probe, will be restarted Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.10:8181/ready": dial tcp 10.64.2.10:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-wfgss to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.312534938s (1.312544829s including waiting) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "http://10.64.3.19:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.19:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wfgss_kube-system(fd7e5efb-e6c8-4618-8180-372906aca7b7) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-wfgss Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container coredns Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wfgss_kube-system(fd7e5efb-e6c8-4618-8180-372906aca7b7) Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-wfgss Jan 30 01:33:25.383: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-ftgx9 Jan 30 01:33:25.383: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 01:33:25.383: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 01:33:25.383: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_4cdf3 became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ac338 became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ce918 became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_95a0e became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b698e became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_c9f6e became leader Jan 30 01:33:25.383: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_def12 became leader Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-8dmqc to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image 
"registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.553029522s (1.553048635s including waiting) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Failed: Error: failed to get sandbox container task: no running task found: task 86cfa70222386362fc21e6e023af3c49885ce70bddff79db189147c2227c0263 not found: not found Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-8dmqc_kube-system(a86afb6b-ee26-4ee2-9404-ff14a1aeed70) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.22:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-8dmqc_kube-system(a86afb6b-ee26-4ee2-9404-ff14a1aeed70) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-8dmqc_kube-system(a86afb6b-ee26-4ee2-9404-ff14a1aeed70) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.50:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9j2sg to bootstrap-e2e-minion-group-bt6j Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 656.707165ms (656.714689ms including waiting) Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:33:25.383: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9j2sg_kube-system(5f7283c4-d762-4a76-9256-c7f2436df7b8) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Failed: Error: failed to get sandbox container task: no running task found: task 9a140d170ea34dd325d74d04502b642d66d48dc918c508b31dfb8ef904c34432 not found: not found Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9j2sg_kube-system(5f7283c4-d762-4a76-9256-c7f2436df7b8) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "http://10.64.0.16:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9j2sg: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9psf2 to bootstrap-e2e-minion-group-hkv2 Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.037846ms (657.0539ms including waiting) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Failed: Error: failed to get sandbox container task: no running task found: task 407f4fd26023877d10eebda20a4d5c9df500dcd16aae590846edc1a34c8af1f5 not found: not found Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9psf2_kube-system(67a256ba-75bf-455f-b0c8-cf102cff2423) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container konnectivity-agent Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9psf2_kube-system(67a256ba-75bf-455f-b0c8-cf102cff2423) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for konnectivity-agent-9psf2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-8dmqc Jan 30 01:33:25.384: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9j2sg Jan 30 01:33:25.384: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9psf2 Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 01:33:25.384: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 01:33:25.384: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 30 01:33:25.384: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 30 01:33:25.384: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 01:33:25.384: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 01:33:25.384: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 01:33:25.384: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(548d8a4d412ea624192633f425ca8149) Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_be709c60-2a3f-4849-ab64-ecee40b17104 became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0d612d5f-2e3f-4a93-a500-bf3745a493f8 became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de6ee059-96ba-4c3d-bfde-95cf9e7419b1 became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_981f6359-bd6e-4ea5-8b97-1399424ecde9 became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e45780c5-7c56-43d2-bee3-3b5de7a3ce4e became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_38a2ca4d-d586-469a-9141-6e91cfbf3c0e became leader Jan 30 01:33:25.384: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_68e8b07d-1e8d-44d3-a16a-ac4fb54b83fd became leader Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. 
preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-x6fsx to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.673140332s (2.673153883s including waiting) Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-x6fsx_kube-system(316ca4a7-6c99-481e-a0ff-1766a6a888be) Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-x6fsx Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-x6fsx_kube-system(316ca4a7-6c99-481e-a0ff-1766a6a888be) Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container autoscaler Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-x6fsx_kube-system(316ca4a7-6c99-481e-a0ff-1766a6a888be) Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-x6fsx Jan 30 01:33:25.384: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet 
bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bt6j_kube-system(6671c8c6e4e16a3c254833ebe19049da) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bt6j_kube-system(6671c8c6e4e16a3c254833ebe19049da) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bt6j_kube-system(6671c8c6e4e16a3c254833ebe19049da) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-dx3p_kube-system(cdadd6623acbd4ce0baf8d2112f24c5c) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-dx3p_kube-system(cdadd6623acbd4ce0baf8d2112f24c5c) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container kube-proxy Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821) Jan 30 01:33:25.384: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(ecb5a5dcd22e71f77775e7d311196ff2) Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 01:33:25.384: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and 
re-created. Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_46660e88-d124-4363-8950-417bf47fc5ec became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4963297b-5fc0-4e05-bc1c-8c1650a00819 became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_afbdfa26-5274-47a0-9831-809769f20f6c became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ab8bcd4c-da94-4a00-bdb9-4647d5d24710 became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f794b2d6-a10a-4d64-bc1d-5b73f901cfdf became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_158f4c8c-6cc2-471c-a7ac-c6d9ae954f8e became leader Jan 30 01:33:25.384: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f6cd8b96-575a-4c71-a4dc-e242e928b304 became leader Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9cjjm to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 591.563236ms (591.574604ms including waiting) Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: 
{taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-9cjjm Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container default-http-backend Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.39:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9cjjm Jan 30 01:33:25.384: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 01:33:25.384: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://10.138.0.2:8086/healthz": dial tcp 10.138.0.2:8086: connect: connection refused Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {default-scheduler } Scheduled: Successfully assigned 
kube-system/metadata-proxy-v0.1-6t4zl to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 879.557706ms (879.569889ms including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.047520415s (2.047529815s including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-jc4vr to bootstrap-e2e-minion-group-hkv2 Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 780.744165ms (780.763471ms including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.829700722s (1.829716599s including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-jc4vr: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-mrhx2 to bootstrap-e2e-minion-group-bt6j Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.937195ms (714.950733ms including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:33:25.384: INFO: event for 
metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.784882746s (1.784891187s including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-mrhx2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qndlb to bootstrap-e2e-master Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 800.897673ms (800.905188ms including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.422135272s (2.422143281s including waiting) Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-mrhx2 Jan 30 01:33:25.384: INFO: event 
for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qndlb Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-jc4vr Jan 30 01:33:25.384: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-6t4zl Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-6btrg to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.86733657s (2.867346928s including waiting) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.963636058s (3.963650732s including waiting) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
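The FailedScheduling messages above ("0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable") are the scheduler reporting taint and toleration mismatches at the time the pod was pending. A hedged sketch for matching such messages to current node state by dumping every node's taints and unschedulable flag; it is not part of the test itself, and reading the kubeconfig from KUBECONFIG is an assumption:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig source
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Unschedulable maps to the "were unschedulable" part of the scheduler message,
		// the taints to the "untolerated taint" part.
		fmt.Printf("%s unschedulable=%v\n", n.Name, n.Spec.Unschedulable)
		for _, t := range n.Spec.Taints {
			fmt.Printf("  taint %s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
	}
}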
Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "https://10.64.3.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "https://10.64.3.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-6btrg Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-6btrg Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-jpr66 to bootstrap-e2e-minion-group-bt6j Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.300742334s (1.300752166s including waiting) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 960.128176ms (960.148415ms including waiting) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get 
"https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "https://10.64.0.10:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Container metrics-server failed liveness probe, will be restarted Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": dial tcp 10.64.0.10:10250: connect: connection refused Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-jpr66 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
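A per-pod event listing like the run above can be pulled directly with a field selector on involvedObject, which is usually easier to read than the full namespace dump when triaging a single restart loop. A minimal sketch with the same KUBECONFIG assumption; the pod and namespace names are the ones from this log:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Only events whose involved object is this specific pod.
	sel := "involvedObject.name=metrics-server-v0.5.2-867b8754b9-jpr66,involvedObject.namespace=kube-system"
	evs, err := cs.CoreV1().Events("kube-system").List(context.Background(),
		metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s %s/%s: %s\n", e.Type, e.Reason, e.Source.Component, e.Message)
	}
}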
Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.13:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "https://10.64.0.13:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.13:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server-nanny Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-jpr66_kube-system(0b345cb2-f3c4-4728-8749-e13c49e0d5b6) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-jpr66_kube-system(0b345cb2-f3c4-4728-8749-e13c49e0d5b6) Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-jpr66 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 01:33:25.384: INFO: event for 
metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 01:33:25.384: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-dx3p Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.442535419s (2.442561914s including waiting) Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301) Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
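In the AfterEach wait that follows, the framework keeps logging "Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController" and counts each check as Failure: a node whose Ready condition is True but which still carries node.kubernetes.io/unreachable taints is not treated as ready. A condensed sketch of that kind of check, written against client-go rather than the framework's own helper, with the usual KUBECONFIG assumption:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeUsable mirrors the idea behind the wait loop below: the Ready condition
// must be True and the node must not carry an unreachable taint from the node controller.
func nodeUsable(n corev1.Node) bool {
	ready := false
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	for _, t := range n.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" {
			return false
		}
	}
	return ready
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s usable=%v\n", n.Name, nodeUsable(n))
	}
}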
Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301) Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:33:25.384: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:33:25.385: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301) Jan 30 01:33:25.385: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:33:25.385 (59ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:33:25.385 Jan 30 01:33:25.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 30 01:33:25.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:27.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:33:29.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:31.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:33.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:35.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:37.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:39.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:41.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:43.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:45.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:47.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:49.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:51.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:33:53.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:55.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:57.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:33:59.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:01.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:03.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:05.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:07.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:09.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:11.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:13.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:15.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. 
Failure Jan 30 01:34:17.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:19.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:21.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure Jan 30 01:34:23.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:28:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:28:27 +0000 UTC}]. Failure < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:34:25.477 (1m0.093s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:34:25.477 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:34:25.477 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:34:25.477 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:34:25.477 STEP: Collecting events from namespace "reboot-3498". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 01:34:25.478 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 01:34:25.518 Jan 30 01:34:25.560: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 01:34:25.560: INFO: Jan 30 01:34:25.605: INFO: Logging node info for node bootstrap-e2e-master Jan 30 01:34:25.647: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 7a2bd2df-fc42-4d55-8404-5b2a0412e072 2976 0 2023-01-30 01:04:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 01:04:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-30 01:04:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 01:31:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:19 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:19 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:19 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:31:19 +0000 UTC,LastTransitionTime:2023-01-30 01:04:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.184.40,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5736e6f149167618f71cd530dafef4cc,SystemUUID:5736e6f1-4916-7618-f71c-d530dafef4cc,BootID:fe689329-330a-4af4-8223-73b99031148e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:125279031,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:34:25.647: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 01:34:25.694: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 01:34:25.756: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container etcd-container ready: true, restart count 0 Jan 30 01:34:25.757: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 01:34:25.757: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container kube-controller-manager ready: true, restart count 8 Jan 30 01:34:25.757: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 01:03:55 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 30 01:34:25.757: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container etcd-container ready: true, restart count 5 Jan 30 01:34:25.757: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 30 01:34:25.757: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container kube-scheduler ready: true, restart count 7 Jan 30 01:34:25.757: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 01:03:55 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:25.757: INFO: Container l7-lb-controller ready: true, restart count 9 Jan 30 01:34:25.757: INFO: metadata-proxy-v0.1-qndlb started at 2023-01-30 01:04:22 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:25.757: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 01:34:25.757: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 01:34:25.950: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 01:34:25.950: INFO: Logging node info for node bootstrap-e2e-minion-group-bt6j Jan 30 01:34:25.993: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-bt6j efad890a-089b-40bf-b3d0-1106dec194f4 3167 0 2023-01-30 01:04:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-bt6j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 01:28:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-30 01:29:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-30 01:29:43 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-30 01:34:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} }]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-minion-group-bt6j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:29:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:29:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:29:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:29:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.46.206,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-bt6j.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-bt6j.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f7ade72fddb4327ba8b5c5a9c07f04c,SystemUUID:3f7ade72-fddb-4327-ba8b-5c5a9c07f04c,BootID:e145d8d8-8bdd-40a3-b85d-a02004edfa80,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:34:25.993: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-bt6j Jan 30 01:34:26.040: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-bt6j Jan 30 01:34:26.107: INFO: kube-proxy-bootstrap-e2e-minion-group-bt6j started at 2023-01-30 01:04:20 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.107: INFO: Container kube-proxy ready: true, restart count 7 Jan 30 01:34:26.107: INFO: metadata-proxy-v0.1-mrhx2 started at 2023-01-30 01:04:21 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:26.107: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:34:26.107: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:34:26.107: INFO: konnectivity-agent-9j2sg started at 2023-01-30 01:04:40 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.107: INFO: Container konnectivity-agent ready: true, restart count 9 Jan 30 01:34:26.107: INFO: metrics-server-v0.5.2-867b8754b9-jpr66 started at 2023-01-30 01:05:48 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:26.107: INFO: Container metrics-server ready: false, restart count 11 Jan 30 01:34:26.107: INFO: Container metrics-server-nanny ready: false, restart count 9 Jan 30 01:34:26.274: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-bt6j Jan 30 01:34:26.274: INFO: Logging node info for node bootstrap-e2e-minion-group-dx3p Jan 30 01:34:26.317: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-dx3p 97ee0a06-78c0-423b-b6ac-5763006307f0 2975 0 2023-01-30 01:04:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-dx3p kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 01:13:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 01:21:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 01:30:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-30 01:31:17 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-minion-group-dx3p,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:41 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:17 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:17 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:31:17 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:31:17 +0000 UTC,LastTransitionTime:2023-01-30 01:21:07 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.43.138,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-dx3p.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-dx3p.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:04cfc971fb8b0e96ce2e62a783445108,SystemUUID:04cfc971-fb8b-0e96-ce2e-62a783445108,BootID:9b9d29fb-6452-40ca-80e4-4ded665f8322,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 
registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:34:26.318: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-dx3p Jan 30 01:34:26.366: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-dx3p Jan 30 01:34:26.433: INFO: kube-proxy-bootstrap-e2e-minion-group-dx3p started at 2023-01-30 01:04:27 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container kube-proxy ready: true, restart count 8 Jan 30 01:34:26.433: INFO: l7-default-backend-8549d69d99-9cjjm started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container default-http-backend ready: true, restart count 4 Jan 30 01:34:26.433: INFO: volume-snapshot-controller-0 started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container volume-snapshot-controller ready: false, restart count 15 Jan 30 01:34:26.433: INFO: coredns-6846b5b5f-wfgss started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container coredns ready: false, restart count 6 Jan 30 01:34:26.433: INFO: kube-dns-autoscaler-5f6455f985-x6fsx started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container autoscaler ready: true, restart count 10 Jan 30 01:34:26.433: INFO: metadata-proxy-v0.1-6t4zl started at 2023-01-30 01:04:28 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:26.433: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:34:26.433: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:34:26.433: INFO: konnectivity-agent-8dmqc started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.433: INFO: Container konnectivity-agent ready: false, restart count 9 Jan 30 01:34:26.600: INFO: Latency metrics for node bootstrap-e2e-minion-group-dx3p Jan 30 01:34:26.600: INFO: Logging node info for node bootstrap-e2e-minion-group-hkv2 Jan 30 01:34:26.643: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hkv2 e09d3248-9b99-4af7-a475-ce2f98c7c753 3158 0 2023-01-30 01:04:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hkv2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 01:28:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-30 01:28:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 01:29:40 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 01:29:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-minion-group-hkv2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 01:29:43 +0000 UTC,LastTransitionTime:2023-01-30 01:18:11 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:40 +0000 UTC,LastTransitionTime:2023-01-30 01:29:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:40 +0000 UTC,LastTransitionTime:2023-01-30 01:29:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:29:40 +0000 UTC,LastTransitionTime:2023-01-30 01:29:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:29:40 +0000 UTC,LastTransitionTime:2023-01-30 01:29:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.9.96,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hkv2.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hkv2.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6477a1a0d081fbd58d469fc57fe2da0f,SystemUUID:6477a1a0-d081-fbd5-8d46-9fc57fe2da0f,BootID:4fe9f4b7-cf7b-4f13-a1b8-cde7d10f2058,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:34:26.643: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hkv2 Jan 30 01:34:26.691: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hkv2 Jan 30 01:34:26.756: INFO: kube-proxy-bootstrap-e2e-minion-group-hkv2 started at 2023-01-30 01:04:23 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.756: INFO: Container kube-proxy ready: false, restart count 12 Jan 30 01:34:26.756: INFO: metadata-proxy-v0.1-jc4vr started at 2023-01-30 01:04:24 +0000 UTC (0+2 container statuses recorded) Jan 30 01:34:26.756: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:34:26.756: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:34:26.756: INFO: konnectivity-agent-9psf2 started at 2023-01-30 01:04:40 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.756: INFO: Container konnectivity-agent ready: true, restart count 8 Jan 30 01:34:26.756: INFO: coredns-6846b5b5f-ftgx9 started at 2023-01-30 01:04:47 +0000 UTC (0+1 container statuses recorded) Jan 30 01:34:26.756: INFO: Container coredns ready: true, restart count 6 Jan 30 01:34:26.927: INFO: Latency metrics for node bootstrap-e2e-minion-group-hkv2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:34:26.927 (1.45s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:34:26.927 (1.45s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] 
Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:34:26.927 STEP: Destroying namespace "reboot-3498" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 01:34:26.927 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:34:26.971 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:34:26.971 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:34:26.971 (0s)
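
For orientation, the "Logging node info" and per-node pod listings above are the framework's post-failure dump: the node's conditions plus every pod scheduled on that node, with container readiness and restart counts. The following is a minimal client-go sketch of an equivalent dump, not the e2e framework's actual dump code; dumpNodeInfo is an invented name and the node name is simply taken from the log.

// Minimal client-go sketch: print a node's conditions and the pods scheduled on it.
// Illustrative only -- NOT the e2e framework's dump code; dumpNodeInfo is an invented name.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func dumpNodeInfo(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Node conditions as they appear in the dump (Ready, MemoryPressure, DiskPressure, ...).
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s: %s)\n", c.Type, c.Status, c.Reason, c.Message)
	}
	// Pods scheduled on this node, analogous to the per-node pod listing in the log.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%t restarts=%d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := dumpNodeInfo(context.Background(), cs, "bootstrap-e2e-minion-group-dx3p"); err != nil {
		panic(err)
	}
}

Run against the same kubeconfig the suite uses, this reproduces roughly the information seen in the dump: Ready/pressure conditions per node and per-container ready/restart counts, which is what the triage above relies on (e.g. kube-proxy restart count 12, volume-snapshot-controller not ready).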
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:36:39.756 There were additional failures detected after the initial failure. These are visible in the timeline. (from ginkgo_report.xml)
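
This failure means the Ready condition of at least one node never flipped within the allotted windows after the SSH-triggered reboot, which is why the log below keeps printing "Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false". Below is a hedged client-go sketch of that wait pattern. It is not the code at test/e2e/cloud/gcp/reboot.go: waitForNodeReadyState and rebootAndWait are invented names, and the "come back Ready" timeout is assumed; only the 2m0s NotReady wait is visible in the log.

// Hedged sketch of the reboot wait pattern: after the reboot is issued over SSH
// (nohup sh -c 'sleep 10 && sudo reboot'), poll the node's Ready condition until
// it goes false, then until it is true again. Invented helper names; not the
// actual test code.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReadyState polls until the node's Ready condition equals want or the
// timeout expires, mirroring "Waiting up to 2m0s for node ... condition Ready to be false".
func waitForNodeReadyState(cs kubernetes.Interface, name string, want v1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient API errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				if c.Status != want {
					fmt.Printf("Condition Ready of node %s is %s instead of %s. Reason: %s\n",
						name, c.Status, want, c.Reason)
					return false, nil
				}
				return true, nil
			}
		}
		return false, nil
	})
}

func rebootAndWait(cs kubernetes.Interface, name string) error {
	// SSH reboot step omitted; see the log line
	//   SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &"
	// The node must first drop out of Ready...
	if err := waitForNodeReadyState(cs, name, v1.ConditionFalse, 2*time.Minute); err != nil {
		return fmt.Errorf("node %s never became NotReady: %w", name, err)
	}
	// ...and then report Ready again before the deadline, or the suite fails with
	// "at least one node failed to reboot in the time given".
	return waitForNodeReadyState(cs, name, v1.ConditionTrue, 5*time.Minute) // assumed timeout
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := rebootAndWait(cs, "bootstrap-e2e-minion-group-bt6j"); err != nil {
		fmt.Println("reboot check failed:", err)
	}
}

In the log that follows, the first phase of this pattern is what times out: the kubelet on bootstrap-e2e-minion-group-bt6j keeps posting Ready, so the NotReady transition is never observed.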
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:34:27.085 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:34:27.085 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:34:27.085 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 01:34:27.085 Jan 30 01:34:27.085: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 01:34:27.086 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 01:34:27.213 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 01:34:27.294 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:34:27.382 (297ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:34:27.382 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:34:27.382 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/30/23 01:34:27.382 Jan 30 01:34:27.530: INFO: Getting bootstrap-e2e-minion-group-bt6j Jan 30 01:34:27.530: INFO: Getting bootstrap-e2e-minion-group-dx3p Jan 30 01:34:27.530: INFO: Getting bootstrap-e2e-minion-group-hkv2 Jan 30 01:34:27.579: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true Jan 30 01:34:27.579: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dx3p condition Ready to be true Jan 30 01:34:27.579: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:34:27.624: INFO: Node bootstrap-e2e-minion-group-hkv2 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Node bootstrap-e2e-minion-group-dx3p has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-x6fsx] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-x6fsx] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-x6fsx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Node bootstrap-e2e-minion-group-bt6j has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-6t4zl" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.625: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.625: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.676: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=true. Elapsed: 51.484321ms Jan 30 01:34:27.676: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.676: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 51.262871ms Jan 30 01:34:27.676: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-6t4zl": Phase="Running", Reason="", readiness=true. Elapsed: 52.384319ms Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-6t4zl" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 52.519053ms Jan 30 01:34:27.677: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 52.408639ms Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. 
Elapsed: 52.902354ms Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 52.708562ms Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.677: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:34:27.677: INFO: Getting external IP address for bootstrap-e2e-minion-group-bt6j Jan 30 01:34:27.677: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-bt6j(35.197.46.206:22) Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p": Phase="Running", Reason="", readiness=true. Elapsed: 53.003558ms Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" satisfied condition "running and ready, or succeeded" Jan 30 01:34:28.199: INFO: ssh prow@35.197.46.206:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 30 01:34:28.199: INFO: ssh prow@35.197.46.206:22: stdout: "" Jan 30 01:34:28.199: INFO: ssh prow@35.197.46.206:22: stderr: "" Jan 30 01:34:28.199: INFO: ssh prow@35.197.46.206:22: exit code: 0 Jan 30 01:34:28.199: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be false Jan 30 01:34:28.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:29.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.098698599s Jan 30 01:34:29.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:29.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 2.101574592s Jan 30 01:34:29.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:30.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:34:31.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.099692372s Jan 30 01:34:31.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:31.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 4.1009875s Jan 30 01:34:31.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:32.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:33.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.099761412s Jan 30 01:34:33.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:33.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.101812786s Jan 30 01:34:33.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:34.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:35.729: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.104489732s Jan 30 01:34:35.729: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:35.732: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 8.107247557s Jan 30 01:34:35.732: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:36.417: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:37.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.100945596s Jan 30 01:34:37.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:37.727: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 10.102164078s Jan 30 01:34:37.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:38.461: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:39.733: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 12.108724776s Jan 30 01:34:39.733: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.108579133s Jan 30 01:34:39.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:39.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:40.505: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:34:41.722: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.097842024s Jan 30 01:34:41.722: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:41.724: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 14.099938426s Jan 30 01:34:41.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:42.549: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:43.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.099707406s Jan 30 01:34:43.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:43.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.101435241s Jan 30 01:34:43.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:44.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:45.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.099771253s Jan 30 01:34:45.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:45.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 18.101221795s Jan 30 01:34:45.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:46.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:47.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.100713816s Jan 30 01:34:47.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:47.727: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 20.102486691s Jan 30 01:34:47.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:48.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:49.717: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.092952981s Jan 30 01:34:49.718: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:49.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 22.101037617s Jan 30 01:34:49.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:50.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:34:51.722: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.097492254s Jan 30 01:34:51.722: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:51.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 24.100140326s Jan 30 01:34:51.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:52.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:53.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.099887126s Jan 30 01:34:53.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:53.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.101008977s Jan 30 01:34:53.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:54.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:55.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.099732183s Jan 30 01:34:55.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:55.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 28.100863433s Jan 30 01:34:55.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:56.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:57.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.099913435s Jan 30 01:34:57.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:57.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 30.10188293s Jan 30 01:34:57.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:58.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:59.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.098034089s Jan 30 01:34:59.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:59.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 32.101971145s Jan 30 01:34:59.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:00.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:35:01.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.100438021s Jan 30 01:35:01.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:01.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 34.101780487s Jan 30 01:35:01.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:02.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:03.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.101670032s Jan 30 01:35:03.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:03.728: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.103432529s Jan 30 01:35:03.728: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:05.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:05.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.09946834s Jan 30 01:35:05.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:05.728: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 38.10327414s Jan 30 01:35:05.728: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:07.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:07.727: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.102093716s Jan 30 01:35:07.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:07.727: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 40.102307004s Jan 30 01:35:07.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:09.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:09.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.100455089s Jan 30 01:35:09.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:09.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 42.102088448s Jan 30 01:35:09.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:11.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:35:11.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.098454388s Jan 30 01:35:11.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:11.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 44.1010145s Jan 30 01:35:11.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:13.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:13.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.099848646s Jan 30 01:35:13.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:13.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.101187938s Jan 30 01:35:13.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:15.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:15.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.099934117s Jan 30 01:35:15.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:15.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 48.101477143s Jan 30 01:35:15.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:17.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:17.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.100893603s Jan 30 01:35:17.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:17.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 50.102102805s Jan 30 01:35:17.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:19.373: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:35:19.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:35:19.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.101378888s Jan 30 01:35:19.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:19.729: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.104271139s Jan 30 01:35:19.729: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:21.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:35:21.719: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.094664958s Jan 30 01:35:21.719: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:21.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 54.10131368s Jan 30 01:35:21.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:23.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:35:23.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
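
Annotation (not part of the log): the entries above show the Ready condition of bootstrap-e2e-minion-group-bt6j flipping from "KubeletReady" to "NodeStatusUnknown / Kubelet stopped posting node status" once the inbound-packet drop takes effect, after which the test starts a fresh 5m wait for the node to report Ready again. For orientation only, here is a minimal client-go sketch, not the e2e framework's own helper; the main wrapper, the use of KUBECONFIG, and the choice of node name (copied from the log) are my assumptions.

```go
// Minimal, illustrative client-go sketch (not the e2e framework's helper).
// Assumptions: KUBECONFIG points at a reachable cluster; the node name is
// taken from the log above purely as an example.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-bt6j", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// While inbound traffic is dropped the kubelet cannot post status,
			// so Status becomes "Unknown" with Reason "NodeStatusUnknown",
			// matching the entries in this log.
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```
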
Elapsed: 56.098139489s Jan 30 01:35:23.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:23.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 56.101444362s Jan 30 01:35:23.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:25.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:25.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.099431795s Jan 30 01:35:25.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:25.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.100751718s Jan 30 01:35:25.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:27.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:27.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.101550787s Jan 30 01:35:27.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:27.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.101453188s Jan 30 01:35:27.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:29.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:29.736: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
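
Annotation (not part of the log): at this point the node lifecycle controller has added the node.kubernetes.io/unreachable taints with NoSchedule and NoExecute effects, and the wait loop keeps reporting "Condition Ready ... is false, but Node is tainted by NodeController" until both the condition and the taints clear. A rough approximation of that recovery predicate is sketched below; the package, function, and constant names are mine, and the exact rule the framework applies may differ.

```go
// Rough approximation (names are mine, not the test's) of the recovery rule
// the log suggests: the node only counts as back when Ready is True AND the
// node.kubernetes.io/unreachable taints added by the node lifecycle
// controller are gone again.
package nodecheck

import (
	corev1 "k8s.io/api/core/v1"
)

const taintKeyUnreachable = "node.kubernetes.io/unreachable"

// readyAndUntainted reports whether the node is Ready and carries no
// unreachable taint (either NoSchedule or NoExecute effect).
func readyAndUntainted(node *corev1.Node) bool {
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			ready = true
			break
		}
	}
	if !ready {
		return false
	}
	for _, t := range node.Spec.Taints {
		if t.Key == taintKeyUnreachable {
			return false
		}
	}
	return true
}
```
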
Elapsed: 1m2.111899131s Jan 30 01:35:29.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:29.738: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.113360851s Jan 30 01:35:29.738: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:31.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:31.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.100461222s Jan 30 01:35:31.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:31.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.100761458s Jan 30 01:35:31.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:33.727: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.10202718s Jan 30 01:35:33.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:33.728: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.103490312s Jan 30 01:35:33.728: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:33.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:35.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.101128878s Jan 30 01:35:35.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:35.729: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.104296739s Jan 30 01:35:35.729: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:35.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:37.745: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.120292344s Jan 30 01:35:37.745: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:37.776: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-hkv2: rpc error: code = Unknown desc = malformed header: missing HTTP content-type Jan 30 01:35:37.776: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-hkv2 failed to be running and ready, or succeeded. Jan 30 01:35:37.776: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:35:37.776: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-hkv2: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:04:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:34:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:34:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:04:23 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-30 01:04:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 01:32:45 +0000 UTC,FinishedAt:2023-01-30 01:34:12 +0000 UTC,ContainerID:containerd://61685857b547af6cbfe580bde5ec5a0405cdd57ebebd3eb5e037e58f7ba65ace,}} Ready:false RestartCount:12 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5 ImageID:sha256:97e90610d7e0d0de64982e40bd97082056a6202717ee03cc0440e25e2723664b ContainerID:containerd://61685857b547af6cbfe580bde5ec5a0405cdd57ebebd3eb5e037e58f7ba65ace Started:0xc0022891ff}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 30 01:35:40.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:41.019: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
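
Annotation (not part of the log): the check for kube-proxy-bootstrap-e2e-minion-group-hkv2 ends above not because the pod became Ready but because a Get call returned an error the framework classifies as non-retryable (the rpc "malformed header: missing HTTP content-type" failure), so the poll is abandoned and the pod is counted as failed; the status dump shows the container in CrashLoopBackOff. A hypothetical polling helper in the same spirit is sketched below; the name waitForPodReady, the 2s interval, and the error handling are illustrative assumptions, not the framework's actual code.

```go
// Hypothetical polling helper (names are mine) mirroring the waits visible in
// this log: re-check the pod until it reports PodReady or the timeout expires.
package podcheck

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady returns nil once the pod reports the PodReady condition.
// Any Get error ends the wait immediately here; a real implementation would
// first decide whether the error is retryable, which is the classification
// the log above applies before giving up on kube-proxy.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return fmt.Errorf("getting pod %s/%s: %w", ns, name, err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s did not become ready within %v", ns, name, timeout)
}
```
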
Elapsed: 1m13.394382493s Jan 30 01:35:41.019: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:41.050: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-hkv2/kube-proxy, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-proxy-bootstrap-e2e-minion-group-hkv2): Jan 30 01:35:41.050: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-hkv2/kube-proxy, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-proxy-bootstrap-e2e-minion-group-hkv2): Jan 30 01:35:41.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.099609328s Jan 30 01:35:41.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:43.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:43.722: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.097804254s Jan 30 01:35:43.722: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:45.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. 
Failure Jan 30 01:35:45.719: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.094882605s Jan 30 01:35:45.719: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:47.122: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:47.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.098851153s Jan 30 01:35:47.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:49.167: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:49.728: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.103144172s Jan 30 01:35:49.728: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:51.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:51.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.098793541s Jan 30 01:35:51.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:53.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:53.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.10155029s Jan 30 01:35:53.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:55.298: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:35:55.298: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:35:55.299: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:35:55.342: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=false. Elapsed: 43.923063ms Jan 30 01:35:55.342: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=false. 
Elapsed: 43.630914ms Jan 30 01:35:55.342: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-mrhx2' on 'bootstrap-e2e-minion-group-bt6j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:35:18 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:35:53 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:20 +0000 UTC }] Jan 30 01:35:55.342: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bt6j' on 'bootstrap-e2e-minion-group-bt6j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:35:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:28:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:20 +0000 UTC }] Jan 30 01:35:55.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.098447768s Jan 30 01:35:55.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:57.395: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 2.096535323s Jan 30 01:35:57.395: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded" Jan 30 01:35:57.399: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 2.10072075s Jan 30 01:35:57.399: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded" Jan 30 01:35:57.399: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:35:57.399: INFO: Reboot successful on node bootstrap-e2e-minion-group-bt6j Jan 30 01:35:57.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
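
Annotation (not part of the log): bootstrap-e2e-minion-group-bt6j is the only node recorded as rebooting successfully in this excerpt; the remaining entries keep polling volume-snapshot-controller-0 on bootstrap-e2e-minion-group-dx3p, whose status dump near the end of this section shows the container in CrashLoopBackOff with a double-digit restart count. The fields those "Status for not ready pod" dumps come from live in the pod's container statuses; the sketch below (package and helper name are mine) prints the waiting reason, restart count, and last terminated state in the same shape.

```go
// Sketch (assumed helper, not from the test) of reading the fields behind the
// status dumps above: the waiting reason (e.g. CrashLoopBackOff), the restart
// count, and the last terminated state of each container.
package podstatus

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func describeContainerState(pod *corev1.Pod) {
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			fmt.Printf("%s: waiting reason=%s restarts=%d message=%q\n",
				cs.Name, w.Reason, cs.RestartCount, w.Message)
		}
		if t := cs.LastTerminationState.Terminated; t != nil {
			fmt.Printf("%s: last exit code=%d reason=%s finished=%s\n",
				cs.Name, t.ExitCode, t.Reason, t.FinishedAt.Time)
		}
	}
}
```
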
Elapsed: 1m30.099714597s Jan 30 01:35:57.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:59.722: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.097949642s Jan 30 01:35:59.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:01.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.098347834s Jan 30 01:36:01.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:03.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.101013632s Jan 30 01:36:03.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:05.718: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m38.093596949s Jan 30 01:36:05.718: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:07.729: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.104273786s Jan 30 01:36:07.729: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:09.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.09863148s Jan 30 01:36:09.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:11.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.099019251s Jan 30 01:36:11.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:13.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m46.099459997s Jan 30 01:36:13.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:15.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.099266745s Jan 30 01:36:15.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:17.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.099359575s Jan 30 01:36:17.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:19.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.101225295s Jan 30 01:36:19.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:21.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m54.100128911s Jan 30 01:36:21.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:23.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.099313082s Jan 30 01:36:23.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:25.733: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.108110158s Jan 30 01:36:25.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:27.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.098961412s Jan 30 01:36:27.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:29.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m2.100899984s Jan 30 01:36:29.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:31.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.100009726s Jan 30 01:36:31.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:33.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.098913761s Jan 30 01:36:33.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:35.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.099934794s Jan 30 01:36:35.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:37.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.100947212s Jan 30 01:36:37.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:39.716: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://34.82.184.40/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 34.82.184.40:443: connect: connection refused Jan 30 01:36:39.716: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 30 01:36:39.716: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-x6fsx] Jan 30 01:36:39.716: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:04:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:33:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:33:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:04:39 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.55 PodIPs:[{IP:10.64.3.55}] StartTime:2023-01-30 01:04:39 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 5m0s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 01:32:08 +0000 UTC,FinishedAt:2023-01-30 01:33:23 +0000 UTC,ContainerID:containerd://e2ca26a4d5a1bd42c345e1e8c8216aa72424aae9ef931bcacecffbca5d58637f,}} Ready:false RestartCount:15 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://e2ca26a4d5a1bd42c345e1e8c8216aa72424aae9ef931bcacecffbca5d58637f Started:0xc00228883f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 30 01:36:39.756: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get 
"https://34.82.184.40/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.82.184.40:443: connect: connection refused: Jan 30 01:36:39.756: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.82.184.40/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.82.184.40:443: connect: connection refused: Jan 30 01:36:39.756: INFO: Node bootstrap-e2e-minion-group-dx3p failed reboot test. Jan 30 01:36:39.756: INFO: Node bootstrap-e2e-minion-group-hkv2 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:36:39.756 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/30/23 01:36:39.756 (2m12.375s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:36:39.756 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 01:36:39.757 Jan 30 01:36:39.796: INFO: Unexpected error: <*url.Error | 0xc0053a0150>: { Op: "Get", URL: "https://34.82.184.40/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc0052800f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0023ffb90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 184, 40], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001470040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.82.184.40/api/v1/namespaces/kube-system/events": dial tcp 34.82.184.40:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/30/23 01:36:39.796 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:36:39.796 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:36:39.797 Jan 30 01:36:39.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:36:39.836 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:36:39.836 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:36:39.836 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:36:39.836 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:36:39.836 STEP: Collecting events from namespace "reboot-5766". 
- test/e2e/framework/debug/dump.go:42 @ 01/30/23 01:36:39.836 Jan 30 01:36:39.876: INFO: Unexpected error: failed to list events in namespace "reboot-5766": <*url.Error | 0xc00381b320>: { Op: "Get", URL: "https://34.82.184.40/api/v1/namespaces/reboot-5766/events", Err: <*net.OpError | 0xc005237360>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003950540>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 184, 40], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc005a2a500>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:36:39.876 (40ms) [FAILED] failed to list events in namespace "reboot-5766": Get "https://34.82.184.40/api/v1/namespaces/reboot-5766/events": dial tcp 34.82.184.40:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/30/23 01:36:39.876 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:36:39.876 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:36:39.876 STEP: Destroying namespace "reboot-5766" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 01:36:39.876 [FAILED] Couldn't delete ns: "reboot-5766": Delete "https://34.82.184.40/api/v1/namespaces/reboot-5766": dial tcp 34.82.184.40:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.184.40/api/v1/namespaces/reboot-5766", Err:(*net.OpError)(0xc0053f2640)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/30/23 01:36:39.916 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:36:39.916 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:36:39.917 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:36:39.917 (0s)
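The repeated "Error evaluating pod condition running and ready, or succeeded" lines above come from a per-pod poll: the test fetches the pod every couple of seconds for up to 5m0s and checks that it is Running (or Succeeded) with the Ready condition True. A minimal client-go sketch of that kind of check follows; it is not the e2e framework's own helper, and only the kubeconfig path, namespace, pod name, interval and timeout are taken from the log, the rest is illustrative.

// Minimal sketch (not the e2e framework's helper): poll a pod until it is
// "running and ready, or succeeded", mirroring the condition the log above
// keeps failing for kube-system/volume-snapshot-controller-0.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podRunningAndReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as logged by the run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// "Waiting up to 5m0s for pod ... to be running and ready, or succeeded".
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	start := time.Now()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "volume-snapshot-controller-0", metav1.GetOptions{})
		if err != nil {
			// In this sketch any Get error ends the wait; in the run above the
			// final error was "connection refused" once the API server became
			// unreachable, logged as a non-retryable error.
			fmt.Printf("error after %v: %v\n", time.Since(start), err)
			return
		}
		if podRunningAndReadyOrSucceeded(pod) {
			fmt.Printf("pod ready after %v\n", time.Since(start))
			return
		}
		fmt.Printf("Phase=%q readiness=false, elapsed %v\n", pod.Status.Phase, time.Since(start))
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(2 * time.Second): // matches the ~2s cadence in the log
		}
	}
}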
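When the wait gives up, the test also tries to dump the failing container's logs, which in the run above fails with "connection refused" because the API server is unreachable. A minimal client-go sketch of that log retrieval follows (again not the framework's helper). Note that the run above requested previous=false for both the current and the "last terminated" container; previous=true is the variant that returns output from the terminated instance of a container stuck in CrashLoopBackOff.

// Minimal sketch: fetch logs for the crash-looping container, both the
// current instance and the previously terminated one.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	for _, previous := range []bool{false, true} {
		// previous=true asks the kubelet for the last terminated instance,
		// which is the useful one for a container in CrashLoopBackOff.
		raw, err := cs.CoreV1().Pods("kube-system").
			GetLogs("volume-snapshot-controller-0", &corev1.PodLogOptions{
				Container: "volume-snapshot-controller",
				Previous:  previous,
			}).DoRaw(ctx)
		if err != nil {
			// With the API server down this fails with "connection refused",
			// exactly as in the run above.
			fmt.Printf("previous=%v: %v\n", previous, err)
			continue
		}
		fmt.Printf("previous=%v:\n%s\n", previous, raw)
	}
}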
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:36:39.756 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
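In the log that follows, the test reboots bootstrap-e2e-minion-group-bt6j over SSH with "nohup sh -c 'sleep 10 && sudo reboot'" and then waits up to 2m0s for the node's Ready condition to turn false, repeatedly printing "Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false" while the kubelet keeps posting ready status. A minimal client-go sketch of that node poll follows; it is not the framework's own helper, and only the node name, timeout and interval come from the log below.

// Minimal sketch: after issuing the reboot over SSH, poll the node object
// until its Ready condition stops being True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// "Waiting up to 2m0s for node ... condition Ready to be false".
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "bootstrap-e2e-minion-group-bt6j", metav1.GetOptions{})
		switch {
		case err != nil:
			// In this sketch errors are logged and the poll simply retries.
			fmt.Printf("error getting node: %v\n", err)
		case !nodeReady(node):
			fmt.Println("node went NotReady; reboot has started")
			return
		default:
			fmt.Println("Condition Ready is still true; waiting")
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to go NotReady")
			return
		case <-time.After(2 * time.Second):
		}
	}
}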
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:34:27.085 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:34:27.085 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:34:27.085 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 01:34:27.085 Jan 30 01:34:27.085: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 01:34:27.086 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 01:34:27.213 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 01:34:27.294 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:34:27.382 (297ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:34:27.382 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:34:27.382 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/30/23 01:34:27.382 Jan 30 01:34:27.530: INFO: Getting bootstrap-e2e-minion-group-bt6j Jan 30 01:34:27.530: INFO: Getting bootstrap-e2e-minion-group-dx3p Jan 30 01:34:27.530: INFO: Getting bootstrap-e2e-minion-group-hkv2 Jan 30 01:34:27.579: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true Jan 30 01:34:27.579: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dx3p condition Ready to be true Jan 30 01:34:27.579: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:34:27.624: INFO: Node bootstrap-e2e-minion-group-hkv2 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Node bootstrap-e2e-minion-group-dx3p has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-x6fsx] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-x6fsx] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-x6fsx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Node bootstrap-e2e-minion-group-bt6j has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-6t4zl" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.624: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.625: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.625: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:34:27.676: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=true. Elapsed: 51.484321ms Jan 30 01:34:27.676: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.676: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 51.262871ms Jan 30 01:34:27.676: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-6t4zl": Phase="Running", Reason="", readiness=true. Elapsed: 52.384319ms Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-6t4zl" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 52.519053ms Jan 30 01:34:27.677: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 52.408639ms Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. 
Elapsed: 52.902354ms Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 52.708562ms Jan 30 01:34:27.677: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded" Jan 30 01:34:27.677: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:34:27.677: INFO: Getting external IP address for bootstrap-e2e-minion-group-bt6j Jan 30 01:34:27.677: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-bt6j(35.197.46.206:22) Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p": Phase="Running", Reason="", readiness=true. Elapsed: 53.003558ms Jan 30 01:34:27.677: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" satisfied condition "running and ready, or succeeded" Jan 30 01:34:28.199: INFO: ssh prow@35.197.46.206:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 30 01:34:28.199: INFO: ssh prow@35.197.46.206:22: stdout: "" Jan 30 01:34:28.199: INFO: ssh prow@35.197.46.206:22: stderr: "" Jan 30 01:34:28.199: INFO: ssh prow@35.197.46.206:22: exit code: 0 Jan 30 01:34:28.199: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be false Jan 30 01:34:28.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:29.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.098698599s Jan 30 01:34:29.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:29.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 2.101574592s Jan 30 01:34:29.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:30.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:34:31.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.099692372s Jan 30 01:34:31.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:31.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 4.1009875s Jan 30 01:34:31.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:32.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:33.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.099761412s Jan 30 01:34:33.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:33.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.101812786s Jan 30 01:34:33.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:34.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:35.729: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.104489732s Jan 30 01:34:35.729: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:35.732: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 8.107247557s Jan 30 01:34:35.732: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:36.417: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:37.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.100945596s Jan 30 01:34:37.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:37.727: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 10.102164078s Jan 30 01:34:37.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:38.461: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:39.733: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 12.108724776s Jan 30 01:34:39.733: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.108579133s Jan 30 01:34:39.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:39.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:40.505: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:34:41.722: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.097842024s Jan 30 01:34:41.722: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:41.724: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 14.099938426s Jan 30 01:34:41.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:42.549: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:43.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.099707406s Jan 30 01:34:43.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:43.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.101435241s Jan 30 01:34:43.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:44.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:45.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.099771253s Jan 30 01:34:45.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:45.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 18.101221795s Jan 30 01:34:45.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:46.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:47.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.100713816s Jan 30 01:34:47.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:47.727: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 20.102486691s Jan 30 01:34:47.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:48.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:49.717: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.092952981s Jan 30 01:34:49.718: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:49.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 22.101037617s Jan 30 01:34:49.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:50.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:34:51.722: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.097492254s Jan 30 01:34:51.722: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:51.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 24.100140326s Jan 30 01:34:51.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:52.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:53.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.099887126s Jan 30 01:34:53.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:53.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.101008977s Jan 30 01:34:53.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:54.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:55.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.099732183s Jan 30 01:34:55.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:55.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 28.100863433s Jan 30 01:34:55.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:56.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:57.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.099913435s Jan 30 01:34:57.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:57.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 30.10188293s Jan 30 01:34:57.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:34:58.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:34:59.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.098034089s Jan 30 01:34:59.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:34:59.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 32.101971145s Jan 30 01:34:59.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:00.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:35:01.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.100438021s Jan 30 01:35:01.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:01.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 34.101780487s Jan 30 01:35:01.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:02.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:03.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.101670032s Jan 30 01:35:03.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:03.728: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.103432529s Jan 30 01:35:03.728: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:05.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:05.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.09946834s Jan 30 01:35:05.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:05.728: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 38.10327414s Jan 30 01:35:05.728: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:07.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:07.727: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.102093716s Jan 30 01:35:07.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:07.727: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 40.102307004s Jan 30 01:35:07.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:09.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:09.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.100455089s Jan 30 01:35:09.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:09.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 42.102088448s Jan 30 01:35:09.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:11.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:35:11.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.098454388s Jan 30 01:35:11.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:11.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 44.1010145s Jan 30 01:35:11.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:13.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:13.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.099848646s Jan 30 01:35:13.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:13.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.101187938s Jan 30 01:35:13.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:15.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:15.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.099934117s Jan 30 01:35:15.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:15.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 48.101477143s Jan 30 01:35:15.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:17.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:35:17.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.100893603s Jan 30 01:35:17.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:17.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 50.102102805s Jan 30 01:35:17.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:19.373: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:35:19.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:35:19.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.101378888s Jan 30 01:35:19.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:19.729: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.104271139s Jan 30 01:35:19.729: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:21.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:35:21.719: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.094664958s Jan 30 01:35:21.719: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:21.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 54.10131368s Jan 30 01:35:21.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:23.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:35:23.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.098139489s Jan 30 01:35:23.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:23.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 56.101444362s Jan 30 01:35:23.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:25.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:25.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.099431795s Jan 30 01:35:25.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:25.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.100751718s Jan 30 01:35:25.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:27.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:27.726: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.101550787s Jan 30 01:35:27.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:27.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.101453188s Jan 30 01:35:27.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:29.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:29.736: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.111899131s Jan 30 01:35:29.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:29.738: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.113360851s Jan 30 01:35:29.738: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:31.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:31.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.100461222s Jan 30 01:35:31.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:31.725: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.100761458s Jan 30 01:35:31.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:33.727: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.10202718s Jan 30 01:35:33.727: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:33.728: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.103490312s Jan 30 01:35:33.728: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:33.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:35.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.101128878s Jan 30 01:35:35.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:35.729: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.104296739s Jan 30 01:35:35.729: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:34:13 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:35:35.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:37.745: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.120292344s Jan 30 01:35:37.745: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:37.776: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-hkv2: rpc error: code = Unknown desc = malformed header: missing HTTP content-type Jan 30 01:35:37.776: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-hkv2 failed to be running and ready, or succeeded. Jan 30 01:35:37.776: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:35:37.776: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-hkv2: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:04:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:34:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:34:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:04:23 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-30 01:04:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 01:32:45 +0000 UTC,FinishedAt:2023-01-30 01:34:12 +0000 UTC,ContainerID:containerd://61685857b547af6cbfe580bde5ec5a0405cdd57ebebd3eb5e037e58f7ba65ace,}} Ready:false RestartCount:12 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5 ImageID:sha256:97e90610d7e0d0de64982e40bd97082056a6202717ee03cc0440e25e2723664b ContainerID:containerd://61685857b547af6cbfe580bde5ec5a0405cdd57ebebd3eb5e037e58f7ba65ace Started:0xc0022891ff}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 30 01:35:40.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:41.019: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m13.394382493s Jan 30 01:35:41.019: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:41.050: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-hkv2/kube-proxy, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-proxy-bootstrap-e2e-minion-group-hkv2): Jan 30 01:35:41.050: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-hkv2/kube-proxy, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-proxy-bootstrap-e2e-minion-group-hkv2): Jan 30 01:35:41.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.099609328s Jan 30 01:35:41.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:43.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:43.722: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.097804254s Jan 30 01:35:43.722: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:45.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. 
Failure Jan 30 01:35:45.719: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.094882605s Jan 30 01:35:45.719: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:47.122: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:47.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.098851153s Jan 30 01:35:47.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:49.167: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:49.728: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.103144172s Jan 30 01:35:49.728: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:51.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:35:18 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:51.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.098793541s Jan 30 01:35:51.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:53.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 01:35:23 +0000 UTC}]. Failure Jan 30 01:35:53.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.10155029s Jan 30 01:35:53.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:55.298: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:35:55.298: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:35:55.299: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:35:55.342: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=false. Elapsed: 43.923063ms Jan 30 01:35:55.342: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=false. 
Elapsed: 43.630914ms Jan 30 01:35:55.342: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-mrhx2' on 'bootstrap-e2e-minion-group-bt6j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:35:18 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:35:53 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:20 +0000 UTC }] Jan 30 01:35:55.342: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bt6j' on 'bootstrap-e2e-minion-group-bt6j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:35:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:28:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:20 +0000 UTC }] Jan 30 01:35:55.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.098447768s Jan 30 01:35:55.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:57.395: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 2.096535323s Jan 30 01:35:57.395: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded" Jan 30 01:35:57.399: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 2.10072075s Jan 30 01:35:57.399: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded" Jan 30 01:35:57.399: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-bt6j metadata-proxy-v0.1-mrhx2] Jan 30 01:35:57.399: INFO: Reboot successful on node bootstrap-e2e-minion-group-bt6j Jan 30 01:35:57.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m30.099714597s Jan 30 01:35:57.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:35:59.722: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.097949642s Jan 30 01:35:59.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:01.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.098347834s Jan 30 01:36:01.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:03.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.101013632s Jan 30 01:36:03.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:05.718: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m38.093596949s Jan 30 01:36:05.718: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:07.729: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.104273786s Jan 30 01:36:07.729: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:09.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.09863148s Jan 30 01:36:09.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:11.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.099019251s Jan 30 01:36:11.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:13.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m46.099459997s Jan 30 01:36:13.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:15.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.099266745s Jan 30 01:36:15.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:17.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.099359575s Jan 30 01:36:17.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:19.726: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.101225295s Jan 30 01:36:19.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:21.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m54.100128911s Jan 30 01:36:21.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:23.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.099313082s Jan 30 01:36:23.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:25.733: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.108110158s Jan 30 01:36:25.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:27.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.098961412s Jan 30 01:36:27.724: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:29.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m2.100899984s Jan 30 01:36:29.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:31.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.100009726s Jan 30 01:36:31.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:33.723: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.098913761s Jan 30 01:36:33.723: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:35.724: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.099934794s Jan 30 01:36:35.725: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:37.725: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.100947212s Jan 30 01:36:37.726: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:33:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:36:39.716: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://34.82.184.40/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 34.82.184.40:443: connect: connection refused Jan 30 01:36:39.716: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 30 01:36:39.716: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-x6fsx] Jan 30 01:36:39.716: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:04:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:33:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:33:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 01:04:39 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.55 PodIPs:[{IP:10.64.3.55}] StartTime:2023-01-30 01:04:39 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 5m0s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 01:32:08 +0000 UTC,FinishedAt:2023-01-30 01:33:23 +0000 UTC,ContainerID:containerd://e2ca26a4d5a1bd42c345e1e8c8216aa72424aae9ef931bcacecffbca5d58637f,}} Ready:false RestartCount:15 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://e2ca26a4d5a1bd42c345e1e8c8216aa72424aae9ef931bcacecffbca5d58637f Started:0xc00228883f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 30 01:36:39.756: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get 
"https://34.82.184.40/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.82.184.40:443: connect: connection refused: Jan 30 01:36:39.756: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.82.184.40/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.82.184.40:443: connect: connection refused: Jan 30 01:36:39.756: INFO: Node bootstrap-e2e-minion-group-dx3p failed reboot test. Jan 30 01:36:39.756: INFO: Node bootstrap-e2e-minion-group-hkv2 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:36:39.756 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/30/23 01:36:39.756 (2m12.375s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:36:39.756 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 01:36:39.757 Jan 30 01:36:39.796: INFO: Unexpected error: <*url.Error | 0xc0053a0150>: { Op: "Get", URL: "https://34.82.184.40/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc0052800f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0023ffb90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 184, 40], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001470040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.82.184.40/api/v1/namespaces/kube-system/events": dial tcp 34.82.184.40:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/30/23 01:36:39.796 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:36:39.796 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:36:39.797 Jan 30 01:36:39.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:36:39.836 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:36:39.836 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:36:39.836 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:36:39.836 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:36:39.836 STEP: Collecting events from namespace "reboot-5766". 
- test/e2e/framework/debug/dump.go:42 @ 01/30/23 01:36:39.836 Jan 30 01:36:39.876: INFO: Unexpected error: failed to list events in namespace "reboot-5766": <*url.Error | 0xc00381b320>: { Op: "Get", URL: "https://34.82.184.40/api/v1/namespaces/reboot-5766/events", Err: <*net.OpError | 0xc005237360>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003950540>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 184, 40], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc005a2a500>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:36:39.876 (40ms) [FAILED] failed to list events in namespace "reboot-5766": Get "https://34.82.184.40/api/v1/namespaces/reboot-5766/events": dial tcp 34.82.184.40:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/30/23 01:36:39.876 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:36:39.876 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:36:39.876 STEP: Destroying namespace "reboot-5766" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 01:36:39.876 [FAILED] Couldn't delete ns: "reboot-5766": Delete "https://34.82.184.40/api/v1/namespaces/reboot-5766": dial tcp 34.82.184.40:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.184.40/api/v1/namespaces/reboot-5766", Err:(*net.OpError)(0xc0053f2640)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/30/23 01:36:39.916 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:36:39.916 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:36:39.917 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:36:39.917 (0s)
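The entry above ends with the control plane at 34.82.184.40 refusing connections while volume-snapshot-controller-0 sits in CrashLoopBackOff, so the framework's "running and ready, or succeeded" poll never passes before the timeout. Below is a minimal client-go sketch of that same readiness check, useful for re-probing the pod by hand once the apiserver is reachable again; the kubeconfig path and 5m/2s timings mirror the log, everything else (helper names, error handling) is illustrative and not taken from the e2e framework.

// Sketch: re-run the "running and ready, or succeeded" check for
// kube-system/volume-snapshot-controller-0 by hand.
// Assumptions: kubeconfig at /workspace/.kube/config (as in the log),
// 2s poll interval and 5m timeout; helper names are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReadyOrSucceeded(p *corev1.Pod) bool {
	if p.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if p.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "volume-snapshot-controller-0", metav1.GetOptions{})
		if err != nil {
			// e.g. "connection refused" while the master is still coming back
			fmt.Println("get pod:", err)
		} else if podReadyOrSucceeded(pod) {
			fmt.Println("pod is running and ready, or succeeded")
			return
		} else {
			fmt.Printf("phase=%s, not ready yet\n", pod.Status.Phase)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod readiness")
}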
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:21:12.952
from ginkgo_report.xml
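The log below shows the unclean-reboot flow: once all pods on a node are running and ready, the test SSHes a sysrq-trigger reboot onto the node, waits up to 2m0s for the node's Ready condition to go false (kubelet stops posting status), and then up to 5m0s for it to come back true. A minimal sketch of that node-condition wait, assuming client-go and the same kubeconfig; node name and timeouts are taken from the log, helper names are illustrative.

// Sketch: wait for a node's Ready condition to reach a desired value,
// mirroring the "Waiting up to ... for node ... condition Ready to be
// true/false" lines in the log below.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReadyStatus returns the status of the node's Ready condition, or "" if absent.
func nodeReadyStatus(n *corev1.Node) corev1.ConditionStatus {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status
		}
	}
	return ""
}

func waitForNodeReady(cs kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Println("get node:", err) // tolerate transient apiserver errors
		} else if nodeReadyStatus(node) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s: Ready did not become %s within %s", name, want, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// After the sysrq-triggered reboot the node should first drop out of
	// Ready and then recover once the kubelet posts status again.
	name := "bootstrap-e2e-minion-group-bt6j"
	if err := waitForNodeReady(cs, name, corev1.ConditionFalse, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
	if err := waitForNodeReady(cs, name, corev1.ConditionTrue, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}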
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:16:00.392 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:16:00.392 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:16:00.392 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 01:16:00.392 Jan 30 01:16:00.392: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 01:16:00.393 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 01:16:49.982 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 01:16:50.073 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:16:50.168 (49.776s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:16:50.168 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:16:50.168 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/30/23 01:16:50.168 Jan 30 01:16:50.264: INFO: Getting bootstrap-e2e-minion-group-hkv2 Jan 30 01:16:50.264: INFO: Getting bootstrap-e2e-minion-group-bt6j Jan 30 01:16:50.264: INFO: Getting bootstrap-e2e-minion-group-dx3p Jan 30 01:16:50.344: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dx3p condition Ready to be true Jan 30 01:16:50.344: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true Jan 30 01:16:50.345: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:16:50.388: INFO: Node bootstrap-e2e-minion-group-bt6j has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-mrhx2 kube-proxy-bootstrap-e2e-minion-group-bt6j] Jan 30 01:16:50.388: INFO: Node bootstrap-e2e-minion-group-dx3p has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-mrhx2 kube-proxy-bootstrap-e2e-minion-group-bt6j] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.388: INFO: Node bootstrap-e2e-minion-group-hkv2 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.389: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.389: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-x6fsx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.389: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.389: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-6t4zl" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.435: INFO: Pod "metadata-proxy-v0.1-6t4zl": Phase="Running", Reason="", readiness=true. Elapsed: 46.080525ms Jan 30 01:16:50.435: INFO: Pod "metadata-proxy-v0.1-6t4zl" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.437: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 48.502097ms Jan 30 01:16:50.437: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:50.438: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 49.831328ms Jan 30 01:16:50.438: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.438: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p": Phase="Running", Reason="", readiness=true. Elapsed: 49.795431ms Jan 30 01:16:50.438: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.438: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. Elapsed: 50.028416ms Jan 30 01:16:50.438: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.439: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.086329ms Jan 30 01:16:50.439: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:50.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=true. Elapsed: 50.109262ms Jan 30 01:16:50.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 50.167468ms Jan 30 01:16:50.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.439: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:16:50.439: INFO: Getting external IP address for bootstrap-e2e-minion-group-hkv2 Jan 30 01:16:50.439: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-mrhx2 kube-proxy-bootstrap-e2e-minion-group-bt6j] Jan 30 01:16:50.439: INFO: Getting external IP address for bootstrap-e2e-minion-group-bt6j Jan 30 01:16:50.439: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-hkv2(34.82.9.96:22) Jan 30 01:16:50.439: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-bt6j(35.197.46.206:22) Jan 30 01:16:50.970: INFO: ssh prow@35.197.46.206:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 01:16:50.970: INFO: ssh prow@35.197.46.206:22: stdout: "" Jan 30 01:16:50.970: INFO: ssh prow@35.197.46.206:22: stderr: "" Jan 30 01:16:50.970: INFO: ssh prow@35.197.46.206:22: exit code: 0 Jan 30 01:16:50.970: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be false Jan 30 01:16:50.984: INFO: ssh prow@34.82.9.96:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 01:16:50.984: INFO: ssh prow@34.82.9.96:22: stdout: "" Jan 30 01:16:50.984: INFO: ssh prow@34.82.9.96:22: stderr: "" Jan 30 01:16:50.984: INFO: ssh prow@34.82.9.96:22: exit code: 0 Jan 30 01:16:50.984: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be false Jan 30 01:16:51.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:51.027: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:52.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 2.091738142s Jan 30 01:16:52.480: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:52.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.094092041s Jan 30 01:16:52.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:53.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:53.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:54.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 4.090521028s Jan 30 01:16:54.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:54.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.09219779s Jan 30 01:16:54.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:55.102: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:55.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:56.481: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 6.092101306s Jan 30 01:16:56.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:56.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.093571746s Jan 30 01:16:56.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:57.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:57.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:58.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.09062271s Jan 30 01:16:58.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:58.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.092360655s Jan 30 01:16:58.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:59.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:59.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:00.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 10.090875906s Jan 30 01:17:00.480: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:00.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.092307973s Jan 30 01:17:00.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:01.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:01.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:02.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 12.091339616s Jan 30 01:17:02.480: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:02.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.094395408s Jan 30 01:17:02.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:03.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:03.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:04.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.090635927s Jan 30 01:17:04.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:04.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.092346626s Jan 30 01:17:04.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:05.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:05.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:06.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 16.090809066s Jan 30 01:17:06.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:06.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.092570102s Jan 30 01:17:06.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:07.388: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:07.388: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:08.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 18.090689778s Jan 30 01:17:08.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:08.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.092657486s Jan 30 01:17:08.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:09.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:09.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:10.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.09087698s Jan 30 01:17:10.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:10.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.093028757s Jan 30 01:17:10.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:11.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:11.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:12.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 22.091583166s Jan 30 01:17:12.480: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:12.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.093082921s Jan 30 01:17:12.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:13.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:13.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:14.481: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 24.092523775s Jan 30 01:17:14.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:14.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 24.094142942s Jan 30 01:17:14.483: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 01:17:15.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:15.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:16.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 26.090637618s Jan 30 01:17:16.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:17.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:17:17.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:18.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=true. Elapsed: 28.091579847s Jan 30 01:17:18.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx" satisfied condition "running and ready, or succeeded" Jan 30 01:17:18.480: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:17:18.480: INFO: Getting external IP address for bootstrap-e2e-minion-group-dx3p Jan 30 01:17:18.480: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-dx3p(34.145.43.138:22) Jan 30 01:17:19.025: INFO: ssh prow@34.145.43.138:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 01:17:19.025: INFO: ssh prow@34.145.43.138:22: stdout: "" Jan 30 01:17:19.025: INFO: ssh prow@34.145.43.138:22: stderr: "" Jan 30 01:17:19.025: INFO: ssh prow@34.145.43.138:22: exit code: 0 Jan 30 01:17:19.025: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-dx3p condition Ready to be false Jan 30 01:17:19.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:19.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:19.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:21.112: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:21.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:21.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:23.156: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:23.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:23.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:25.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:25.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:25.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:27.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:27.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:27.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:29.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:29.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:29.881: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:31.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:31.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:31.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:33.372: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:33.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:33.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:35.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:36.012: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:17:36.012: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true Jan 30 01:17:36.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:36.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:37.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:17:38.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:38.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:39.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:40.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:40.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:41.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:42.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:42.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:43.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:44.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:44.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:45.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:46.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:46.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:47.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:48.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:48.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:49.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:50.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:50.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:51.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:52.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:52.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:53.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:54.460: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:54.460: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:55.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:56.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:56.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:57.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:58.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:58.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:59.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:00.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:00.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:02.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:02.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:02.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:04.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:04.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:04.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:06.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:06.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:06.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:08.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:08.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:08.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:10.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:10.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:10.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:12.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:12.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:12.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:14.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:14.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:14.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:16.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:16.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:16.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:18.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:19.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:19.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:20.412: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:21.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
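Editor's note: the polling lines above and below keep printing each node's Ready condition together with its Reason and Message ("NodeStatusUnknown / Kubelet stopped posting node status" once the kubelet goes silent). As a minimal sketch, assuming a kubeconfig at the path this job uses and reusing a node name from the log (the program itself is ours, not part of the e2e framework), the same information can be read with client-go like this:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the path used by this job.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-bt6j", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Mirrors the "Condition Ready ... Reason ... message" lines in the log.
			fmt.Printf("Ready=%s Reason=%s Message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```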
Jan 30 01:18:21.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:22.455: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:23.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:23.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:24.499: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:25.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:25.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:26.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:27.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:27.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:28.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:29.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:29.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:30.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:31.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:18:31.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:32.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:33.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:33.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:34.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:35.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:35.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:36.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:37.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:37.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:38.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:39.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:39.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:40.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:41.518: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:18:41.518: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:42.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:43.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:43.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:44.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:45.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:45.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:46.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:47.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:47.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:49.031: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:49.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:49.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:51.075: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:51.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:18:51.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:53.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:53.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:53.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:55.163: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:55.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:55.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:57.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:57.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:57.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:59.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:18:59.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:18:59.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:01.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:01.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:19:01.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:03.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:04.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:04.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:05.378: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:06.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:06.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:07.421: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:08.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:08.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:09.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:10.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:10.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:11.508: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:12.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:19:12.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:13.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:14.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:14.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:15.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:16.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:16.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:17.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:19:18.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:18.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:19.642: INFO: Node bootstrap-e2e-minion-group-dx3p didn't reach desired Ready condition status (false) within 2m0s Jan 30 01:19:20.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:20.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:22.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:22.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:24.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
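Editor's note: the "didn't reach desired Ready condition status (false) within 2m0s" entry above is the test waiting for bootstrap-e2e-minion-group-dx3p to actually go NotReady after the reboot/disruption was issued; dx3p kept posting "KubeletReady", so the wait timed out, which is why that node is later reported as failing the reboot test. A rough sketch of that kind of bounded wait, assuming a client-go clientset (the helper name and shape are ours, not the framework's):

```go
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReadyStatus polls until the node's Ready condition equals want,
// or gives up after timeout (the log above does the equivalent with
// want=false and a 2m0s timeout).
func waitForNodeReadyStatus(ctx context.Context, cs kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == want {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows a roughly 2s poll interval
	}
	return fmt.Errorf("node %s didn't reach desired Ready condition status (%s) within %v", name, want, timeout)
}
```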
Jan 30 01:19:24.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:26.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:26.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:28.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:28.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:30.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:30.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:32.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:32.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:34.693: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:34.693: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:36.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:36.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:38.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:19:38.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:40.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:40.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:42.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:42.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:44.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:44.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:46.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:46.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:49.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:49.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:51.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:51.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:53.117: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:19:53.117: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:55.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:55.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:57.208: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:57.208: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:19:59.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:19:59.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:01.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:01.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:03.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:03.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:05.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:05.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:07.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:20:07.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:09.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:09.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:11.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:11.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:13.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:13.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:15.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:15.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:17.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:17.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:19.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:19.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:21.776: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:21.776: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:23.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:23.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:25.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:25.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:27.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:27.909: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:29.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:29.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:31.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:31.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:34.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:34.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:36.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:36.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. 
Failure Jan 30 01:20:38.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:38.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:40.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:40.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:42.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:42.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:44.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:44.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:46.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:46.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:48.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:48.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:50.406: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:50.406: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:52.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 01:20:52.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:54.498: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:54.498: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:56.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:20:56.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:58.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:20:58.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:21:00.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:21:00.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:21:02.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:21:02.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:21:04.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:21:04.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:21:06.777: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
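Editor's note: the recurring "Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule ...} {node.kubernetes.io/unreachable NoExecute ...}]" entries come from taints the node-lifecycle controller places on unreachable nodes; NoSchedule keeps new pods off the node, while NoExecute additionally allows eviction of pods that don't tolerate it. A small sketch for inspecting those taints, assuming a client-go clientset (helper name is ours):

```go
package taints

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeTaints lists the taints currently on a node, which is what the
// bracketed taint entries in the log are printing.
func printNodeTaints(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("%s:%s (added %v)\n", t.Key, t.Effect, t.TimeAdded)
	}
	return nil
}
```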
Jan 30 01:21:06.777: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:21:08.821: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:21:08.821: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:21:08.821: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 01:21:07 +0000 UTC}]. Failure Jan 30 01:21:08.821: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:21:08.865: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=false. Elapsed: 43.63571ms Jan 30 01:21:08.865: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hkv2' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:17:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:21:08.865: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=false. Elapsed: 44.097264ms Jan 30 01:21:08.865: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jc4vr' on 'bootstrap-e2e-minion-group-hkv2' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:17:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:14:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:23 +0000 UTC }] Jan 30 01:21:10.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 01:21:07 +0000 UTC}]. Failure Jan 30 01:21:10.914: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. Elapsed: 2.09301215s Jan 30 01:21:10.914: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded" Jan 30 01:21:10.916: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=true. Elapsed: 2.095094338s Jan 30 01:21:10.916: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" satisfied condition "running and ready, or succeeded" Jan 30 01:21:10.916: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:21:10.916: INFO: Reboot successful on node bootstrap-e2e-minion-group-hkv2 Jan 30 01:21:12.908: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-mrhx2 kube-proxy-bootstrap-e2e-minion-group-bt6j] Jan 30 01:21:12.908: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:21:12.908: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:21:12.952: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 43.830534ms Jan 30 01:21:12.952: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded" Jan 30 01:21:12.952: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 44.119521ms Jan 30 01:21:12.952: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded" Jan 30 01:21:12.952: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-mrhx2 kube-proxy-bootstrap-e2e-minion-group-bt6j] Jan 30 01:21:12.952: INFO: Reboot successful on node bootstrap-e2e-minion-group-bt6j Jan 30 01:21:12.952: INFO: Node bootstrap-e2e-minion-group-dx3p failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:21:12.952 < Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/30/23 01:21:12.952 (4m22.784s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:21:12.952 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 01:21:12.952 Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-ftgx9 to bootstrap-e2e-minion-group-hkv2 Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.002631704s (1.002651112s including waiting) Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": dial tcp 10.64.2.3:8181: connect: connection refused Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
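Editor's note: the "Error evaluating pod condition running and ready, or succeeded" entries earlier in this passage show why a Running pod can still fail the check: its PodReady condition was False even though the containers were up. A simplified sketch of that predicate (the real e2e helper lives in the test utilities; this version only looks at phase and the Ready condition):

```go
package podcheck

import corev1 "k8s.io/api/core/v1"

// runningReadyOrSucceeded approximates the "running and ready, or succeeded"
// check the log is evaluating above.
func runningReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			// kube-proxy-bootstrap-e2e-minion-group-hkv2 above was Running but
			// Ready=False at first, so this would return false until readiness recovered.
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```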
Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Readiness probe failed: Get "http://10.64.2.5:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-ftgx9: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-wfgss to bootstrap-e2e-minion-group-dx3p Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.312534938s (1.312544829s including waiting) Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "http://10.64.3.19:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.19:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wfgss_kube-system(fd7e5efb-e6c8-4618-8180-372906aca7b7) Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-wfgss Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container coredns Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wfgss_kube-system(fd7e5efb-e6c8-4618-8180-372906aca7b7) Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f-wfgss: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-wfgss Jan 30 01:21:13.007: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-ftgx9 Jan 30 01:21:13.007: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 01:21:13.007: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 01:21:13.007: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 01:21:13.007: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 01:21:13.007: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 01:21:13.007: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 01:21:13.007: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 01:21:13.007: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 01:21:13.007: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_4cdf3 became leader Jan 30 01:21:13.007: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ac338 became leader Jan 30 01:21:13.007: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ce918 became leader Jan 30 01:21:13.007: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_95a0e became leader Jan 30 01:21:13.007: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b698e became leader Jan 30 01:21:13.007: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_c9f6e became leader Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-8dmqc to bootstrap-e2e-minion-group-dx3p Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.553029522s (1.553048635s including waiting) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Failed: Error: failed to get sandbox container task: no running task found: task 86cfa70222386362fc21e6e023af3c49885ce70bddff79db189147c2227c0263 not found: not found Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-8dmqc_kube-system(a86afb6b-ee26-4ee2-9404-ff14a1aeed70) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.22:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-8dmqc_kube-system(a86afb6b-ee26-4ee2-9404-ff14a1aeed70) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-8dmqc: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9j2sg to bootstrap-e2e-minion-group-bt6j Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 656.707165ms (656.714689ms including waiting) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9j2sg_kube-system(5f7283c4-d762-4a76-9256-c7f2436df7b8) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Failed: Error: failed to get sandbox container task: no running task found: task 9a140d170ea34dd325d74d04502b642d66d48dc918c508b31dfb8ef904c34432 not found: not found Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9j2sg: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9psf2 to bootstrap-e2e-minion-group-hkv2 Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.037846ms (657.0539ms including waiting) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Failed: Error: failed to get sandbox container task: no running task found: task 407f4fd26023877d10eebda20a4d5c9df500dcd16aae590846edc1a34c8af1f5 not found: not found Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9psf2_kube-system(67a256ba-75bf-455f-b0c8-cf102cff2423) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
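Many of the events above are HTTP liveness or readiness probes timing out ("context deadline exceeded") followed by the kubelet restarting the container. For reference, a probe of roughly this shape reproduces that event sequence when its endpoint stops answering; the path and port are taken from the logged URL, while the timing values are assumptions, not the manifest's actual settings:

// Illustrative only: roughly the shape of the HTTP liveness probe whose
// failures appear in the konnectivity-agent events above.
package probes

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func konnectivityLivenessProbe() *corev1.Probe {
    return &corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/healthz",
                Port: intstr.FromInt(8093),
            },
        },
        // Assumed values: if the endpoint cannot be reached within
        // TimeoutSeconds, the kubelet records "Liveness probe failed ...
        // context deadline exceeded" and, after FailureThreshold consecutive
        // failures, restarts the container ("failed liveness probe, will be
        // restarted"), as seen in the events above.
        TimeoutSeconds:   5,
        PeriodSeconds:    10,
        FailureThreshold: 3,
    }
}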
Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent-9psf2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container konnectivity-agent Jan 30 01:21:13.007: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-8dmqc Jan 30 01:21:13.007: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9j2sg Jan 30 01:21:13.007: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9psf2 Jan 30 01:21:13.007: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 01:21:13.007: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 01:21:13.007: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 01:21:13.007: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 01:21:13.007: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 30 01:21:13.007: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:21:13.007: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 01:21:13.007: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 01:21:13.007: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(548d8a4d412ea624192633f425ca8149) Jan 30 01:21:13.007: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_be709c60-2a3f-4849-ab64-ecee40b17104 became leader Jan 30 01:21:13.007: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0d612d5f-2e3f-4a93-a500-bf3745a493f8 became leader Jan 30 01:21:13.007: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: 
bootstrap-e2e-master_de6ee059-96ba-4c3d-bfde-95cf9e7419b1 became leader Jan 30 01:21:13.007: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_981f6359-bd6e-4ea5-8b97-1399424ecde9 became leader Jan 30 01:21:13.007: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e45780c5-7c56-43d2-bee3-3b5de7a3ce4e became leader Jan 30 01:21:13.007: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_38a2ca4d-d586-469a-9141-6e91cfbf3c0e became leader Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-x6fsx to bootstrap-e2e-minion-group-dx3p Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.673140332s (2.673153883s including waiting) Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container autoscaler Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-x6fsx_kube-system(316ca4a7-6c99-481e-a0ff-1766a6a888be) Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-x6fsx Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container autoscaler Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-x6fsx_kube-system(316ca4a7-6c99-481e-a0ff-1766a6a888be) Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container autoscaler Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985-x6fsx: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container autoscaler Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-x6fsx Jan 30 01:21:13.007: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bt6j_kube-system(6671c8c6e4e16a3c254833ebe19049da) Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bt6j_kube-system(6671c8c6e4e16a3c254833ebe19049da) Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bt6j: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-dx3p_kube-system(cdadd6623acbd4ce0baf8d2112f24c5c) Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dx3p: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821) Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Killing: Stopping container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hkv2_kube-system(9c65fc331fb8e465e8ca146aedb85821) Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hkv2: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container kube-proxy Jan 30 01:21:13.007: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5" already present on machine Jan 30 01:21:13.007: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 01:21:13.007: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 01:21:13.007: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(ecb5a5dcd22e71f77775e7d311196ff2) Jan 30 01:21:13.007: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 01:21:13.007: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_46660e88-d124-4363-8950-417bf47fc5ec became leader Jan 30 01:21:13.007: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4963297b-5fc0-4e05-bc1c-8c1650a00819 became leader Jan 30 01:21:13.007: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_afbdfa26-5274-47a0-9831-809769f20f6c became leader Jan 30 01:21:13.007: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ab8bcd4c-da94-4a00-bdb9-4647d5d24710 became leader Jan 30 01:21:13.007: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f794b2d6-a10a-4d64-bc1d-5b73f901cfdf became leader Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9cjjm to bootstrap-e2e-minion-group-dx3p Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 591.563236ms (591.574604ms including waiting) Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container default-http-backend Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-9cjjm Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
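The FailedScheduling messages ("untolerated taint {node.kubernetes.io/not-ready: }") reflect the standard taint/toleration check: a pending pod stays unschedulable until some node's hard taints are all tolerated. A minimal sketch of that check using the core v1 types follows; the helper name is illustrative and this is not the scheduler's actual code path:

// Minimal sketch of the check behind the FailedScheduling events above: a pod
// fits a node only if every NoSchedule/NoExecute taint on it is tolerated.
package scheduling

import corev1 "k8s.io/api/core/v1"

func toleratesAllTaints(pod *corev1.Pod, node *corev1.Node) bool {
    for i := range node.Spec.Taints {
        taint := &node.Spec.Taints[i]
        // PreferNoSchedule is a soft preference, so only hard effects count here.
        if taint.Effect != corev1.TaintEffectNoSchedule &&
            taint.Effect != corev1.TaintEffectNoExecute {
            continue
        }
        tolerated := false
        for j := range pod.Spec.Tolerations {
            if pod.Spec.Tolerations[j].ToleratesTaint(taint) {
                tolerated = true
                break
            }
        }
        if !tolerated {
            return false
        }
    }
    return true
}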
Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container default-http-backend Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99-9cjjm: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container default-http-backend Jan 30 01:21:13.007: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9cjjm Jan 30 01:21:13.007: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 01:21:13.007: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 01:21:13.007: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 01:21:13.007: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 01:21:13.007: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 01:21:13.007: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://10.138.0.2:8086/healthz": dial tcp 10.138.0.2:8086: connect: connection refused Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-6t4zl to bootstrap-e2e-minion-group-dx3p Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 879.557706ms (879.569889ms including waiting) Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.047520415s (2.047529815s including waiting) Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-6t4zl: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-jc4vr to bootstrap-e2e-minion-group-hkv2 Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 780.744165ms (780.763471ms including waiting) Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.829700722s (1.829716599s including waiting) Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-jc4vr: {kubelet bootstrap-e2e-minion-group-hkv2} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-mrhx2 to bootstrap-e2e-minion-group-bt6j Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 714.937195ms (714.950733ms including waiting) Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.784882746s (1.784891187s including waiting) Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-mrhx2: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qndlb to bootstrap-e2e-master Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 800.897673ms (800.905188ms including waiting) Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.422135272s (2.422143281s including waiting) Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1-qndlb: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-mrhx2 Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qndlb Jan 30 
01:21:13.007: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-jc4vr Jan 30 01:21:13.007: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-6t4zl Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-6btrg to bootstrap-e2e-minion-group-dx3p Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.86733657s (2.867346928s including waiting) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.963636058s (3.963650732s including waiting) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container metrics-server-nanny Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container metrics-server-nanny Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container metrics-server-nanny Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Readiness probe failed: Get "https://10.64.3.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c-6btrg: {kubelet bootstrap-e2e-minion-group-dx3p} Unhealthy: Liveness probe failed: Get "https://10.64.3.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-6btrg Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-6btrg Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-jpr66 to bootstrap-e2e-minion-group-bt6j Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.300742334s (1.300752166s including waiting) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 960.128176ms (960.148415ms including waiting) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server-nanny Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server-nanny Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get 
"https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: HTTP probe failed with statuscode: 500 Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Stopping container metrics-server-nanny Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server-nanny Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Started: Started container metrics-server-nanny Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Liveness probe failed: Get "https://10.64.0.10:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Killing: Container metrics-server failed liveness probe, will be restarted Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Unhealthy: Readiness probe failed: Get "https://10.64.0.10:10250/readyz": dial tcp 10.64.0.10:10250: connect: connection refused Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-jpr66 Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9-jpr66: {kubelet bootstrap-e2e-minion-group-bt6j} Created: Created container metrics-server Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-jpr66 Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 01:21:13.007: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-dx3p Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.442535419s (2.442561914s including waiting) Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301) Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(7029d163-353e-4569-b724-268397d21301) Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Created: Created container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Started: Started container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-dx3p} Killing: Stopping container volume-snapshot-controller Jan 30 01:21:13.007: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 01:21:13.007 (55ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:21:13.007 Jan 30 01:21:13.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 01:21:13.051 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:21:13.051 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 01:21:13.051 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:21:13.051 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:21:13.051 STEP: Collecting events from namespace "reboot-3304". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 01:21:13.051 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 01:21:13.092 Jan 30 01:21:13.134: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 01:21:13.134: INFO: Jan 30 01:21:13.177: INFO: Logging node info for node bootstrap-e2e-master Jan 30 01:21:13.220: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 7a2bd2df-fc42-4d55-8404-5b2a0412e072 2258 0 2023-01-30 01:04:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 01:04:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-30 01:04:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 01:21:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:04 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:04 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:04 +0000 UTC,LastTransitionTime:2023-01-30 01:04:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:21:04 +0000 UTC,LastTransitionTime:2023-01-30 01:04:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.184.40,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5736e6f149167618f71cd530dafef4cc,SystemUUID:5736e6f1-4916-7618-f71c-d530dafef4cc,BootID:fe689329-330a-4af4-8223-73b99031148e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:125279031,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:21:13.220: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 01:21:13.267: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 01:21:13.328: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 01:03:55 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.328: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 30 01:21:13.328: INFO: metadata-proxy-v0.1-qndlb started at 2023-01-30 01:04:22 +0000 UTC (0+2 container statuses recorded) Jan 30 01:21:13.328: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 01:21:13.328: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 01:21:13.328: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.328: INFO: Container etcd-container ready: true, restart count 4 Jan 30 01:21:13.328: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.328: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 30 01:21:13.328: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.328: INFO: Container kube-scheduler ready: false, restart count 4 Jan 30 01:21:13.328: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 01:03:55 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.328: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 30 01:21:13.328: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.328: INFO: Container etcd-container ready: true, restart count 0 Jan 30 01:21:13.328: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.328: INFO: Container kube-apiserver ready: true, restart count 0 Jan 30 01:21:13.328: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 01:03:35 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.328: INFO: Container kube-controller-manager ready: true, restart count 6 Jan 30 01:21:13.532: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 01:21:13.532: INFO: Logging node info for node bootstrap-e2e-minion-group-bt6j Jan 30 01:21:13.575: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-bt6j efad890a-089b-40bf-b3d0-1106dec194f4 2341 0 2023-01-30 01:04:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-bt6j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 01:17:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-30 01:18:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 01:21:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 01:21:07 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-minion-group-bt6j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 01:18:09 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 01:18:09 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 01:18:09 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 01:18:09 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 01:18:09 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 01:18:09 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 01:18:09 +0000 UTC,LastTransitionTime:2023-01-30 01:18:08 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:07 +0000 UTC,LastTransitionTime:2023-01-30 01:21:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:07 +0000 UTC,LastTransitionTime:2023-01-30 01:21:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:07 +0000 UTC,LastTransitionTime:2023-01-30 01:21:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:21:07 +0000 UTC,LastTransitionTime:2023-01-30 01:21:07 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.46.206,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-bt6j.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-bt6j.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f7ade72fddb4327ba8b5c5a9c07f04c,SystemUUID:3f7ade72-fddb-4327-ba8b-5c5a9c07f04c,BootID:e145d8d8-8bdd-40a3-b85d-a02004edfa80,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:21:13.575: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-bt6j Jan 30 01:21:13.623: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-bt6j Jan 30 01:21:13.683: INFO: kube-proxy-bootstrap-e2e-minion-group-bt6j started at 2023-01-30 01:04:20 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.683: INFO: Container kube-proxy ready: true, restart count 5 Jan 30 01:21:13.683: INFO: metadata-proxy-v0.1-mrhx2 started at 2023-01-30 01:04:21 +0000 UTC (0+2 container statuses recorded) Jan 30 01:21:13.683: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:21:13.683: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:21:13.683: INFO: konnectivity-agent-9j2sg started at 2023-01-30 01:04:40 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:13.683: INFO: Container konnectivity-agent ready: true, restart count 5 Jan 30 01:21:13.683: INFO: metrics-server-v0.5.2-867b8754b9-jpr66 started at 2023-01-30 01:05:48 +0000 UTC (0+2 container statuses recorded) Jan 30 01:21:13.683: INFO: Container metrics-server ready: false, restart count 6 Jan 30 01:21:13.683: INFO: Container metrics-server-nanny ready: false, restart count 4 Jan 30 01:21:17.599: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-bt6j Jan 30 01:21:17.599: INFO: Logging node info for node bootstrap-e2e-minion-group-dx3p Jan 30 01:21:17.643: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-dx3p 97ee0a06-78c0-423b-b6ac-5763006307f0 2284 0 2023-01-30 01:04:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-dx3p kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 01:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 01:13:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-30 01:18:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 01:21:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 01:21:07 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-upgrade/us-west1-b/bootstrap-e2e-minion-group-dx3p,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 01:18:40 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 01:18:40 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 01:18:40 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 01:18:40 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 01:18:40 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 01:18:40 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 01:18:40 +0000 UTC,LastTransitionTime:2023-01-30 01:18:39 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 01:04:39 +0000 UTC,LastTransitionTime:2023-01-30 01:04:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:07 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:07 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 01:21:07 +0000 UTC,LastTransitionTime:2023-01-30 01:14:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 01:21:07 +0000 UTC,LastTransitionTime:2023-01-30 01:21:07 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.43.138,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-dx3p.c.k8s-jkns-gce-upgrade.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-dx3p.c.k8s-jkns-gce-upgrade.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:04cfc971fb8b0e96ce2e62a783445108,SystemUUID:04cfc971-fb8b-0e96-ce2e-62a783445108,BootID:9b9d29fb-6452-40ca-80e4-4ded665f8322,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,KubeProxyVersion:v1.27.0-alpha.1.76+5bb7326c3643f5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.76_5bb7326c3643f5],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 
registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:21:17.643: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-dx3p Jan 30 01:21:17.691: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-dx3p Jan 30 01:21:17.808: INFO: metadata-proxy-v0.1-6t4zl started at 2023-01-30 01:04:28 +0000 UTC (0+2 container statuses recorded) Jan 30 01:21:17.808: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:21:17.808: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:21:17.808: INFO: konnectivity-agent-8dmqc started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:17.808: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 30 01:21:17.808: INFO: kube-proxy-bootstrap-e2e-minion-group-dx3p started at 2023-01-30 01:04:27 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:17.808: INFO: Container kube-proxy ready: true, restart count 5 Jan 30 01:21:17.808: INFO: l7-default-backend-8549d69d99-9cjjm started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:17.808: INFO: Container default-http-backend ready: false, restart count 2 Jan 30 01:21:17.808: INFO: volume-snapshot-controller-0 started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:17.808: INFO: Container volume-snapshot-controller ready: false, restart count 10 Jan 30 01:21:17.808: INFO: coredns-6846b5b5f-wfgss started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:17.808: INFO: Container coredns ready: false, restart count 6 Jan 30 01:21:17.808: INFO: kube-dns-autoscaler-5f6455f985-x6fsx started at 2023-01-30 01:04:39 +0000 UTC (0+1 container statuses recorded) Jan 30 01:21:17.808: INFO: Container autoscaler ready: false, restart count 7 Jan 30 01:21:53.508: INFO: Latency metrics for node bootstrap-e2e-minion-group-dx3p Jan 30 01:21:53.508: INFO: Logging node info for node bootstrap-e2e-minion-group-hkv2 Jan 30 01:22:53.551: INFO: Error getting node info Get "https://34.82.184.40/api/v1/nodes/bootstrap-e2e-minion-group-hkv2": stream error: stream ID 2493; INTERNAL_ERROR; received from peer Jan 30 01:22:53.551: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 01:22:53.551: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hkv2 Jan 30 01:22:53.598: INFO: Logging pods the kubelet thinks is on node 
bootstrap-e2e-minion-group-hkv2 Jan 30 01:23:38.282: INFO: kube-proxy-bootstrap-e2e-minion-group-hkv2 started at 2023-01-30 01:04:23 +0000 UTC (0+1 container statuses recorded) Jan 30 01:23:38.282: INFO: Container kube-proxy ready: false, restart count 9 Jan 30 01:23:38.282: INFO: metadata-proxy-v0.1-jc4vr started at 2023-01-30 01:04:24 +0000 UTC (0+2 container statuses recorded) Jan 30 01:23:38.282: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 01:23:38.282: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 01:23:38.282: INFO: konnectivity-agent-9psf2 started at 2023-01-30 01:04:40 +0000 UTC (0+1 container statuses recorded) Jan 30 01:23:38.282: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 30 01:23:38.282: INFO: coredns-6846b5b5f-ftgx9 started at 2023-01-30 01:04:47 +0000 UTC (0+1 container statuses recorded) Jan 30 01:23:38.282: INFO: Container coredns ready: true, restart count 4 Jan 30 01:23:38.448: INFO: Latency metrics for node bootstrap-e2e-minion-group-hkv2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 01:23:38.448 (2m25.397s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 01:23:38.448 (2m25.398s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:23:38.448 STEP: Destroying namespace "reboot-3304" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 01:23:38.448 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 01:23:38.493 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:23:38.493 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 01:23:38.493 (0s)
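Note on the per-node pod dump above ("Logging pods the kubelet thinks is on node ...", "Container ... ready: ..., restart count ..."): the sketch below is a minimal client-go equivalent for reproducing that kind of node-level triage by hand. It is illustrative only, not the e2e framework's own helper; it assumes the kubeconfig path shown in the log and takes the node name as a flag.

// Lists the pods bound to one node and prints container readiness and restart
// counts, similar to the "Logging pods the kubelet thinks is on node" dump.
// Illustrative sketch only; not the code used by the e2e framework.
package main

import (
	"context"
	"flag"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "/workspace/.kube/config", "path to kubeconfig")
	nodeName := flag.String("node", "bootstrap-e2e-minion-group-dx3p", "node to inspect")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The field selector restricts the list to pods scheduled onto this node.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + *nodeName})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n",
				cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}

Run it with -node set to one of the minion names above to get roughly the same readiness/restart summary the test logs on failure.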
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 01:21:12.952
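The log below follows the same per-node pattern: verify the node's daemon pods are running, trigger an unclean reboot over SSH by enabling sysrq and writing "b" to /proc/sysrq-trigger, wait up to 2m0s for the node's Ready condition to go false, then wait up to 5m0s for it to come back true; the reported failure means at least one node did not complete that cycle in time. The sketch below mirrors the flow with plain client-go. It is illustrative only (the real logic lives in test/e2e/cloud/gcp/reboot.go and the framework's SSH helpers), and runSSH here is a hypothetical stand-in that only logs the command it would run.

// Sketch of the unclean-reboot check visible in the log below: run the sysrq
// command on the node, wait for Ready to flip to false, then wait for it to
// return to true. Illustrative only; not the test/e2e/cloud/gcp/reboot.go code.
package rebootcheck

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Command taken verbatim from the log: enable sysrq, then after 10s write "b"
// to /proc/sysrq-trigger, which reboots the machine immediately without
// syncing or unmounting filesystems.
const uncleanRebootCmd = "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && " +
	"sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"

// runSSH is a hypothetical stand-in for the e2e framework's SSH helper; it
// only logs what would be executed against the node's external IP.
func runSSH(host, cmd string) error {
	fmt.Printf("SSH %q on %s\n", cmd, host)
	return nil
}

// nodeReady reports whether the node's Ready condition is currently True.
func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

// waitForReady polls every 2s until Ready equals want or the timeout expires,
// mirroring the "Waiting up to ... for node ... condition Ready to be
// false/true" lines in the log.
func waitForReady(ctx context.Context, c kubernetes.Interface, name string, want bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready, err := nodeReady(ctx, c, name); err == nil && ready == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s: condition Ready did not become %v within %v", name, want, timeout)
}

// CheckUncleanReboot reboots one node and waits for it to drop out of and
// return to Ready, using the same 2m/5m budgets seen in the log.
func CheckUncleanReboot(ctx context.Context, c kubernetes.Interface, name, externalIP string) error {
	if err := runSSH(externalIP+":22", uncleanRebootCmd); err != nil {
		return err
	}
	if err := waitForReady(ctx, c, name, false, 2*time.Minute); err != nil {
		return err
	}
	// The failure reported above corresponds to a step like this timing out.
	return waitForReady(ctx, c, name, true, 5*time.Minute)
}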
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:16:00.392 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 01:16:00.392 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:16:00.392 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 01:16:00.392 Jan 30 01:16:00.392: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 01:16:00.393 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 01:16:49.982 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 01:16:50.073 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 01:16:50.168 (49.776s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:16:50.168 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 01:16:50.168 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/30/23 01:16:50.168 Jan 30 01:16:50.264: INFO: Getting bootstrap-e2e-minion-group-hkv2 Jan 30 01:16:50.264: INFO: Getting bootstrap-e2e-minion-group-bt6j Jan 30 01:16:50.264: INFO: Getting bootstrap-e2e-minion-group-dx3p Jan 30 01:16:50.344: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dx3p condition Ready to be true Jan 30 01:16:50.344: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true Jan 30 01:16:50.345: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:16:50.388: INFO: Node bootstrap-e2e-minion-group-bt6j has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-mrhx2 kube-proxy-bootstrap-e2e-minion-group-bt6j] Jan 30 01:16:50.388: INFO: Node bootstrap-e2e-minion-group-dx3p has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-mrhx2 kube-proxy-bootstrap-e2e-minion-group-bt6j] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.388: INFO: Node bootstrap-e2e-minion-group-hkv2 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jc4vr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.388: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.389: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mrhx2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.389: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-x6fsx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.389: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.389: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-6t4zl" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 01:16:50.435: INFO: Pod "metadata-proxy-v0.1-6t4zl": Phase="Running", Reason="", readiness=true. Elapsed: 46.080525ms Jan 30 01:16:50.435: INFO: Pod "metadata-proxy-v0.1-6t4zl" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.437: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 48.502097ms Jan 30 01:16:50.437: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:50.438: INFO: Pod "metadata-proxy-v0.1-mrhx2": Phase="Running", Reason="", readiness=true. Elapsed: 49.831328ms Jan 30 01:16:50.438: INFO: Pod "metadata-proxy-v0.1-mrhx2" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.438: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p": Phase="Running", Reason="", readiness=true. Elapsed: 49.795431ms Jan 30 01:16:50.438: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dx3p" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.438: INFO: Pod "metadata-proxy-v0.1-jc4vr": Phase="Running", Reason="", readiness=true. Elapsed: 50.028416ms Jan 30 01:16:50.438: INFO: Pod "metadata-proxy-v0.1-jc4vr" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.439: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.086329ms Jan 30 01:16:50.439: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:50.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2": Phase="Running", Reason="", readiness=true. Elapsed: 50.109262ms Jan 30 01:16:50.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hkv2" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j": Phase="Running", Reason="", readiness=true. Elapsed: 50.167468ms Jan 30 01:16:50.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bt6j" satisfied condition "running and ready, or succeeded" Jan 30 01:16:50.439: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-hkv2 metadata-proxy-v0.1-jc4vr] Jan 30 01:16:50.439: INFO: Getting external IP address for bootstrap-e2e-minion-group-hkv2 Jan 30 01:16:50.439: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-mrhx2 kube-proxy-bootstrap-e2e-minion-group-bt6j] Jan 30 01:16:50.439: INFO: Getting external IP address for bootstrap-e2e-minion-group-bt6j Jan 30 01:16:50.439: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-hkv2(34.82.9.96:22) Jan 30 01:16:50.439: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-bt6j(35.197.46.206:22) Jan 30 01:16:50.970: INFO: ssh prow@35.197.46.206:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 01:16:50.970: INFO: ssh prow@35.197.46.206:22: stdout: "" Jan 30 01:16:50.970: INFO: ssh prow@35.197.46.206:22: stderr: "" Jan 30 01:16:50.970: INFO: ssh prow@35.197.46.206:22: exit code: 0 Jan 30 01:16:50.970: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be false Jan 30 01:16:50.984: INFO: ssh prow@34.82.9.96:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 01:16:50.984: INFO: ssh prow@34.82.9.96:22: stdout: "" Jan 30 01:16:50.984: INFO: ssh prow@34.82.9.96:22: stderr: "" Jan 30 01:16:50.984: INFO: ssh prow@34.82.9.96:22: exit code: 0 Jan 30 01:16:50.984: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be false Jan 30 01:16:51.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:51.027: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:52.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 2.091738142s Jan 30 01:16:52.480: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:52.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.094092041s Jan 30 01:16:52.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:53.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:53.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:54.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 4.090521028s Jan 30 01:16:54.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:54.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.09219779s Jan 30 01:16:54.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:55.102: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:55.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:56.481: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 6.092101306s Jan 30 01:16:56.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:56.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.093571746s Jan 30 01:16:56.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:57.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:57.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:58.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.09062271s Jan 30 01:16:58.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:58.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.092360655s Jan 30 01:16:58.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:16:59.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:16:59.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:00.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 10.090875906s Jan 30 01:17:00.480: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:00.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.092307973s Jan 30 01:17:00.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:01.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:01.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:02.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 12.091339616s Jan 30 01:17:02.480: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:02.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.094395408s Jan 30 01:17:02.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:03.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:03.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:04.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.090635927s Jan 30 01:17:04.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:04.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.092346626s Jan 30 01:17:04.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:05.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:05.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:06.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 16.090809066s Jan 30 01:17:06.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:06.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.092570102s Jan 30 01:17:06.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:07.388: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:07.388: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:08.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 18.090689778s Jan 30 01:17:08.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:08.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.092657486s Jan 30 01:17:08.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:09.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:09.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:10.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.09087698s Jan 30 01:17:10.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:10.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.093028757s Jan 30 01:17:10.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:11.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:11.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:12.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 22.091583166s Jan 30 01:17:12.480: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:12.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.093082921s Jan 30 01:17:12.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:13.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:13.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:14.481: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 24.092523775s Jan 30 01:17:14.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:14.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 24.094142942s Jan 30 01:17:14.483: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 01:17:15.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:15.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:16.479: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=false. Elapsed: 26.090637618s Jan 30 01:17:16.479: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-x6fsx' on 'bootstrap-e2e-minion-group-dx3p' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:16:49 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 01:04:39 +0000 UTC }] Jan 30 01:17:17.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:17:17.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:18.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx": Phase="Running", Reason="", readiness=true. Elapsed: 28.091579847s Jan 30 01:17:18.480: INFO: Pod "kube-dns-autoscaler-5f6455f985-x6fsx" satisfied condition "running and ready, or succeeded" Jan 30 01:17:18.480: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-x6fsx kube-proxy-bootstrap-e2e-minion-group-dx3p metadata-proxy-v0.1-6t4zl volume-snapshot-controller-0] Jan 30 01:17:18.480: INFO: Getting external IP address for bootstrap-e2e-minion-group-dx3p Jan 30 01:17:18.480: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-dx3p(34.145.43.138:22) Jan 30 01:17:19.025: INFO: ssh prow@34.145.43.138:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 01:17:19.025: INFO: ssh prow@34.145.43.138:22: stdout: "" Jan 30 01:17:19.025: INFO: ssh prow@34.145.43.138:22: stderr: "" Jan 30 01:17:19.025: INFO: ssh prow@34.145.43.138:22: exit code: 0 Jan 30 01:17:19.025: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-dx3p condition Ready to be false Jan 30 01:17:19.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:19.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:19.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:21.112: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:21.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:21.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:23.156: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:23.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:23.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:25.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:25.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:25.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:27.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:27.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:27.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:29.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:29.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:29.881: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:31.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:31.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:31.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:33.372: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:33.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:33.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:35.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:36.012: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-bt6j condition Ready to be true Jan 30 01:17:36.012: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hkv2 condition Ready to be true Jan 30 01:17:36.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:36.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:37.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 01:17:38.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:38.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:39.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:40.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:40.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:41.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:42.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:42.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:43.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:44.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:44.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:45.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:46.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:46.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:47.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:48.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:48.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:49.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:50.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:50.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:51.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:52.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:52.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:53.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:54.460: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:54.460: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:55.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:56.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:56.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:57.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 01:17:58.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure Jan 30 01:17:58.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:59.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:00.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:00.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:02.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:02.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:02.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:04.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:04.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:04.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:06.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:06.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:06.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:08.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:08.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:08.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:10.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:10.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:10.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:12.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:12.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:12.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:14.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:14.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:14.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:16.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:16.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:16.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:18.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:19.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:19.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:20.412: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:21.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:21.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:22.455: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:23.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:23.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:24.499: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:25.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:25.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:26.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:27.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:27.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:28.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:29.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:29.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:30.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:31.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
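The block of INFO lines above is a readiness poll: roughly every two seconds the test re-reads each node's Ready condition. For bootstrap-e2e-minion-group-hkv2 and bootstrap-e2e-minion-group-bt6j it keeps finding the condition false with NodeStatusUnknown (the kubelet has stopped posting status) plus the node.kubernetes.io/unreachable NoSchedule and NoExecute taints that the node lifecycle controller places on unreachable nodes, while bootstrap-e2e-minion-group-dx3p stays Ready. As a rough, hypothetical sketch of that kind of check (not the e2e framework's actual implementation), the following client-go program lists nodes and prints each node's Ready condition and any unreachable taints; the kubeconfig source and the two-second interval are assumptions based on this run's log.

package main

import (
    "context"
    "fmt"
    "os"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: kubeconfig path comes from the KUBECONFIG environment variable.
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    for {
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("listing nodes failed:", err)
        } else {
            for _, node := range nodes.Items {
                // Report the Ready condition, mirroring the "Condition Ready of node ..." lines.
                for _, cond := range node.Status.Conditions {
                    if cond.Type == v1.NodeReady {
                        fmt.Printf("%s Ready=%s reason=%s message=%q\n",
                            node.Name, cond.Status, cond.Reason, cond.Message)
                    }
                }
                // Report unreachable taints, mirroring the "tainted by NodeController" lines.
                for _, taint := range node.Spec.Taints {
                    if taint.Key == "node.kubernetes.io/unreachable" {
                        fmt.Printf("%s tainted: %s %s since %v\n",
                            node.Name, taint.Key, taint.Effect, taint.TimeAdded)
                    }
                }
            }
        }
        // Assumption: consecutive polls in the log above are roughly two seconds apart.
        time.Sleep(2 * time.Second)
    }
}

In this run such a loop would keep reporting Ready=False with the unreachable taints for hkv2 and bt6j, which is the pattern the captured log continues with below.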
Jan 30 01:18:31.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:32.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:33.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:33.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:34.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:35.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:35.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:36.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:37.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 30 01:18:37.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-bt6j is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:40 +0000 UTC}]. Failure
Jan 30 01:18:38.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-dx3p is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 01:18:39.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-hkv2 is false instead of true. Reason: NodeStatusUnknown, message: