go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
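Decoded, the --ginkgo.focus regex above selects this single test:

    Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards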
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:08:17.075
(from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:02:16.953
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:02:16.953 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:02:16.953
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 11:02:16.953
Jan 29 11:02:16.953: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 11:02:16.954
Jan 29 11:02:16.994: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 11:03:14.645
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 11:03:14.728
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:03:14.817 (57.864s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:03:14.817
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:03:14.817 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 11:03:14.817
Jan 29 11:03:14.998: INFO: Getting bootstrap-e2e-minion-group-3n8r
Jan 29 11:03:14.999: INFO: Getting bootstrap-e2e-minion-group-90fc
Jan 29 11:03:14.999: INFO: Getting bootstrap-e2e-minion-group-7sd9
Jan 29 11:03:15.042: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-90fc condition Ready to be true
Jan 29 11:03:15.042: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true
Jan 29 11:03:15.062: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7sd9 condition Ready to be true
Jan 29 11:03:15.187: INFO: Node bootstrap-e2e-minion-group-3n8r has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r]
Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r]
Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:03:15.187: INFO: Node bootstrap-e2e-minion-group-90fc has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc]
Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc]
Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:03:15.195: INFO: Node bootstrap-e2e-minion-group-7sd9 has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m]
Jan 29 11:03:15.195: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m]
Jan 29 11:03:15.195: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-47h2m" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:03:15.195: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ppxd4" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:03:15.195: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:03:15.196: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:03:15.255: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9": Phase="Running", Reason="", readiness=true. Elapsed: 58.981409ms
Jan 29 11:03:15.255: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 67.776763ms
Jan 29 11:03:15.255: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" satisfied condition "running and ready, or succeeded"
Jan 29 11:03:15.255: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded"
Jan 29 11:03:15.256: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 68.736486ms
Jan 29 11:03:15.256: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded"
Jan 29 11:03:15.256: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r]
Jan 29 11:03:15.256: INFO: Getting external IP address for bootstrap-e2e-minion-group-3n8r
Jan 29 11:03:15.256: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-3n8r(34.145.60.3:22)
Jan 29 11:03:15.257: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 61.701965ms
Jan 29 11:03:15.257: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:15.259: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 63.512457ms
Jan 29 11:03:15.259: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:15.261: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 73.214836ms
Jan 29 11:03:15.261: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded"
Jan 29 11:03:15.262: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 74.838747ms
Jan 29 11:03:15.262: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded"
Jan 29 11:03:15.262: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc]
Jan 29 11:03:15.262: INFO: Getting external IP address for bootstrap-e2e-minion-group-90fc
Jan 29 11:03:15.262: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-90fc(34.105.52.142:22)
Jan 29 11:03:15.264: INFO: Pod "metadata-proxy-v0.1-ppxd4": Phase="Running", Reason="", readiness=true. Elapsed: 68.822676ms
Jan 29 11:03:15.264: INFO: Pod "metadata-proxy-v0.1-ppxd4" satisfied condition "running and ready, or succeeded"
Jan 29 11:03:15.800: INFO: ssh prow@34.145.60.3:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 11:03:15.800: INFO: ssh prow@34.145.60.3:22: stdout: ""
Jan 29 11:03:15.800: INFO: ssh prow@34.145.60.3:22: stderr: ""
Jan 29 11:03:15.800: INFO: ssh prow@34.145.60.3:22: exit code: 0
Jan 29 11:03:15.800: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be false
Jan 29 11:03:15.809: INFO: ssh prow@34.105.52.142:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 11:03:15.809: INFO: ssh prow@34.105.52.142:22: stdout: ""
Jan 29 11:03:15.809: INFO: ssh prow@34.105.52.142:22: stderr: ""
Jan 29 11:03:15.809: INFO: ssh prow@34.105.52.142:22: exit code: 0
Jan 29 11:03:15.809: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be false
Jan 29 11:03:15.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:15.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:17.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.103921744s
Jan 29 11:03:17.299: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:17.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107673094s
Jan 29 11:03:17.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:17.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:17.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
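For readability, the command the test sends over SSH to each node (logged above once with \n/\t escapes and again flattened in the ssh echo) reconstructs to the following shell script; the comments are added here and the indentation is inferred from the escapes:

    nohup sh -c '
        set -x
        sleep 10
        # Keep accepting loopback traffic, then drop all other inbound packets.
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        # Two minutes later, delete both rules to restore inbound traffic.
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &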
Jan 29 11:03:19.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.103931143s
Jan 29 11:03:19.299: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:19.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105729321s
Jan 29 11:03:19.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:19.932: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:19.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:21.300: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.104443797s
Jan 29 11:03:21.300: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:21.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106035195s
Jan 29 11:03:21.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:21.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:21.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:23.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.103375406s
Jan 29 11:03:23.299: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105496194s
Jan 29 11:03:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:24.020: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:24.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:25.300: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.104198431s
Jan 29 11:03:25.300: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:25.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.105962567s
Jan 29 11:03:25.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:26.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:26.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:27.301: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.105339282s
Jan 29 11:03:27.301: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 11:03:27.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.107285507s
Jan 29 11:03:27.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:28.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:28.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:29.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 14.106277456s
Jan 29 11:03:29.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:30.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:30.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:31.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 16.105385792s
Jan 29 11:03:31.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:32.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:32.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:33.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 18.106563279s
Jan 29 11:03:33.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:34.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:34.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:35.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 20.10662367s
Jan 29 11:03:35.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:36.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:36.290: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:37.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 22.105850968s
Jan 29 11:03:37.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:38.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:38.334: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:39.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 24.106946012s
Jan 29 11:03:39.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:40.371: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:40.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:41.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 26.106728189s
Jan 29 11:03:41.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:42.414: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:42.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:43.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 28.105832638s
Jan 29 11:03:43.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:44.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:44.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:45.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 30.106048733s
Jan 29 11:03:45.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:46.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:46.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:47.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 32.106522062s
Jan 29 11:03:47.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:48.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:48.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:49.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 34.105432891s
Jan 29 11:03:49.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:50.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:50.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:51.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 36.105402719s
Jan 29 11:03:51.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:52.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:52.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:53.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 38.106197286s
Jan 29 11:03:53.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:54.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:54.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:55.308: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 40.112820744s
Jan 29 11:03:55.308: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:56.717: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:56.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:57.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 42.105518081s
Jan 29 11:03:57.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:58.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:58.770: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:59.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 44.106583327s
Jan 29 11:03:59.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:00.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:00.814: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:01.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 46.105949353s
Jan 29 11:04:01.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:02.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:02.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:03.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 48.105595491s
Jan 29 11:04:03.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:04.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:04.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:05.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 50.107360615s
Jan 29 11:04:05.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:06.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:06.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:07.322: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 52.126751258s
Jan 29 11:04:07.322: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:08.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:09.001: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:09.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 54.107271272s
Jan 29 11:04:09.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:11.024: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true
Jan 29 11:04:11.045: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be true
Jan 29 11:04:11.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:11.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:11.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 56.105719892s
Jan 29 11:04:11.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:13.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:13.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:13.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 58.106154339s
Jan 29 11:04:13.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:15.154: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:15.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:15.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.10596995s
Jan 29 11:04:15.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:17.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:17.222: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:17.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.106215107s
Jan 29 11:04:17.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:19.244: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:19.266: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:19.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.106906666s
Jan 29 11:04:19.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:21.288: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:21.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.106984607s
Jan 29 11:04:21.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:21.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.105720829s
Jan 29 11:04:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:23.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:23.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:25.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.105499166s
Jan 29 11:04:25.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:25.375: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:25.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:27.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.105853524s
Jan 29 11:04:27.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:27.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:27.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:29.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.106840021s
Jan 29 11:04:29.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:29.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:29.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:31.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.105457146s
Jan 29 11:04:31.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:31.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:31.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:33.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.105570162s
Jan 29 11:04:33.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:33.549: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:33.574: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:35.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.106430239s
Jan 29 11:04:35.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:35.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:35.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:37.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.105654443s
Jan 29 11:04:37.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:37.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:37.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:39.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.107204678s
Jan 29 11:04:39.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:39.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:39.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:41.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.105716939s
Jan 29 11:04:41.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:41.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:41.776: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:43.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.107072859s
Jan 29 11:04:43.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:43.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:43.819: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:45.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.10551583s
Jan 29 11:04:45.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:45.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:45.863: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:47.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.107149559s
Jan 29 11:04:47.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:47.863: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:47.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:49.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.106023769s
Jan 29 11:04:49.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:49.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:49.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:51.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.105578344s
Jan 29 11:04:51.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:51.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:51.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:53.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.105506992s
Jan 29 11:04:53.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:53.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:54.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:55.316: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.120859435s
Jan 29 11:04:55.316: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:56.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:56.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:57.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.10544128s
Jan 29 11:04:57.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:58.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:58.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:59.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.108162943s
Jan 29 11:04:59.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:00.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:00.201: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:01.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.106491821s
Jan 29 11:05:01.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:02.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:02.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:03.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.105850406s
Jan 29 11:05:03.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:04.222: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:04.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:05.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.107476208s
Jan 29 11:05:05.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:06.267: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:06.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:07.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.105779562s
Jan 29 11:05:07.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:08.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:08.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:09.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.106944879s
Jan 29 11:05:09.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:10.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:10.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:11.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.106799305s
Jan 29 11:05:11.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:12.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:12.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:13.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.106431077s
Jan 29 11:05:13.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:14.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:14.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:15.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.105516813s
Jan 29 11:05:15.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:16.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:16.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:17.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.106724906s
Jan 29 11:05:17.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:18.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:18.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:19.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false.
Elapsed: 2m4.109463617s Jan 29 11:05:19.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:20.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure Jan 29 11:05:20.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure Jan 29 11:05:21.306: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.110571725s Jan 29 11:05:21.306: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:22.625: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure Jan 29 11:05:22.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure Jan 29 11:05:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.106007999s Jan 29 11:05:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:24.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure Jan 29 11:05:24.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure Jan 29 11:05:25.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.107585541s Jan 29 11:05:25.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:26.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure Jan 29 11:05:26.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. 
Failure Jan 29 11:05:27.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.105756982s Jan 29 11:05:27.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:28.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure Jan 29 11:05:28.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure Jan 29 11:05:29.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.107402263s Jan 29 11:05:29.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:30.802: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r] Jan 29 11:05:30.802: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:05:30.802: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:05:30.857: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 54.803897ms Jan 29 11:05:30.857: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded" Jan 29 11:05:30.857: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 54.907245ms Jan 29 11:05:30.857: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded" Jan 29 11:05:30.857: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r] Jan 29 11:05:30.857: INFO: Reboot successful on node bootstrap-e2e-minion-group-3n8r Jan 29 11:05:30.873: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc] Jan 29 11:05:30.873: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:05:30.873: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:05:30.921: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=false. Elapsed: 47.973364ms Jan 29 11:05:30.921: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.037289ms Jan 29 11:05:30.921: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-90fc' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:04:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:58:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }] Jan 29 11:05:30.921: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-mwf7j' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:04:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:00:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }] Jan 29 11:05:31.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.106512405s Jan 29 11:05:31.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:32.966: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=false. Elapsed: 2.092989554s Jan 29 11:05:32.966: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=false. Elapsed: 2.093115856s Jan 29 11:05:32.966: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-mwf7j' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:04:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:00:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }] Jan 29 11:05:32.966: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-90fc' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:04:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:58:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }] Jan 29 11:05:33.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.107145872s Jan 29 11:05:33.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:34.967: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 4.093921458s Jan 29 11:05:34.967: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded" Jan 29 11:05:34.967: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.094122697s Jan 29 11:05:34.967: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded" Jan 29 11:05:34.967: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc] Jan 29 11:05:34.967: INFO: Reboot successful on node bootstrap-e2e-minion-group-90fc Jan 29 11:05:35.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.106790659s Jan 29 11:05:35.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:37.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.106148573s Jan 29 11:05:37.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:39.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.10716398s Jan 29 11:05:39.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:41.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.105635917s Jan 29 11:05:41.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:43.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.107192249s Jan 29 11:05:43.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:45.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.106259487s Jan 29 11:05:45.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:05:47.327: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.131339022s Jan 29 11:05:47.327: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:10.916: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m55.720797005s Jan 29 11:06:10.916: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:11.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m56.107578296s Jan 29 11:06:11.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:13.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.107040609s Jan 29 11:06:13.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:15.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.106616215s Jan 29 11:06:15.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:17.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.106197679s Jan 29 11:06:17.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:19.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.107144296s Jan 29 11:06:19.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:21.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.108977506s Jan 29 11:06:21.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.106071607s Jan 29 11:06:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:25.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.10540955s Jan 29 11:06:25.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:27.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.106018881s Jan 29 11:06:27.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:29.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.107281987s Jan 29 11:06:29.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:31.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m16.108349082s Jan 29 11:06:31.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:33.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.10605277s Jan 29 11:06:33.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:35.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.106487606s Jan 29 11:06:35.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:37.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.108527383s Jan 29 11:06:37.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:39.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.107112921s Jan 29 11:06:39.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:41.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.108314893s Jan 29 11:06:41.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:43.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.108547594s Jan 29 11:06:43.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:45.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.107084729s Jan 29 11:06:45.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:47.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.107917516s Jan 29 11:06:47.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:49.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.109888843s Jan 29 11:06:49.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:51.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m36.107845408s Jan 29 11:06:51.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:53.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.107880176s Jan 29 11:06:53.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:55.317: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.12172872s Jan 29 11:06:55.317: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:57.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.107227105s Jan 29 11:06:57.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:59.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.106921652s Jan 29 11:06:59.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:01.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.1074312s Jan 29 11:07:01.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:03.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.108676133s Jan 29 11:07:03.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:05.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.106834843s Jan 29 11:07:05.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:07.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.106787885s Jan 29 11:07:07.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:09.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.108603968s Jan 29 11:07:09.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:11.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m56.107879533s Jan 29 11:07:11.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:13.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.106736861s Jan 29 11:07:13.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:15.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.106264345s Jan 29 11:07:15.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:17.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.107509498s Jan 29 11:07:17.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:19.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.107325584s Jan 29 11:07:19.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:21.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.107651931s Jan 29 11:07:21.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.105909719s Jan 29 11:07:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:25.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.107615952s Jan 29 11:07:25.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:27.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.107811385s Jan 29 11:07:27.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:29.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.1079937s Jan 29 11:07:29.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:31.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m16.107444196s Jan 29 11:07:31.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:33.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.107890356s Jan 29 11:07:33.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:35.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.108256917s Jan 29 11:07:35.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:37.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.107509823s Jan 29 11:07:37.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:39.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.108626917s Jan 29 11:07:39.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:41.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.10671471s Jan 29 11:07:41.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:43.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.106062557s Jan 29 11:07:43.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:45.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.106913901s Jan 29 11:07:45.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:47.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.107151787s Jan 29 11:07:47.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:49.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.109648352s Jan 29 11:07:49.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:51.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m36.106884037s Jan 29 11:07:51.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:53.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.106096882s Jan 29 11:07:53.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:55.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.109375465s Jan 29 11:07:55.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:57.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.107143854s Jan 29 11:07:57.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:59.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.109968105s Jan 29 11:07:59.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:01.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.10768647s Jan 29 11:08:01.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:03.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.106322944s Jan 29 11:08:03.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:05.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.106661848s Jan 29 11:08:05.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:07.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.108718383s Jan 29 11:08:07.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:09.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.107855986s Jan 29 11:08:09.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:11.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m56.106723641s
Jan 29 11:08:11.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:08:13.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.109001576s
Jan 29 11:08:13.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 5m57.865s)
  test/e2e/cloud/gcp/reboot.go:136
  In [It] (Node Runtime: 5m0.001s)
    test/e2e/cloud/gcp/reboot.go:136
  Spec Goroutine
    goroutine 3632 [semacquire, 6 minutes]
      sync.runtime_Semacquire(0xc0006d5ba8?)
        /usr/local/go/src/runtime/sema.go:62
      sync.(*WaitGroup).Wait(0x7fd091ede4a0?)
        /usr/local/go/src/sync/waitgroup.go:139
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fd091ede4a0?, 0xc003a3cbc0}, {0x8147108?, 0xc003a6d6c0}, {0xc0022ca1a0, 0x182}, 0xc00540fe00)
        test/e2e/cloud/gcp/reboot.go:181
      > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7fd091ede4a0, 0xc003a3cbc0})
        test/e2e/cloud/gcp/reboot.go:141
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc003a3cbc0})
        vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
      k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
        vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
    goroutine 3647 [chan receive, 6 minutes]
      k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7fd091ede4a0?, 0xc003a3cbc0}, {0x8147108?, 0xc003a6d6c0}, {0x76d190b, 0xb}, {0xc003e92280, 0x4, 0x4}, 0x45d964b800, ...)
        test/e2e/framework/pod/resource.go:531
      k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...)
        test/e2e/framework/pod/resource.go:508
      > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fd091ede4a0, 0xc003a3cbc0}, {0x8147108, 0xc003a6d6c0}, {0x7ffc12df95ee, 0x3}, {0xc0010c9940, 0x1f}, {0xc0022ca1a0, 0x182})
        test/e2e/cloud/gcp/reboot.go:284
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
        test/e2e/cloud/gcp/reboot.go:173
      > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
        test/e2e/cloud/gcp/reboot.go:169
Jan 29 11:08:15.327: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.131843374s
Jan 29 11:08:15.327: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:08:15.375: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.179927635s
Jan 29 11:08:15.375: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:08:15.375: INFO: Pod kube-dns-autoscaler-5f6455f985-47h2m failed to be running and ready, or succeeded.
Jan 29 11:08:15.375: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m]
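The progress dump above captures the structure of the hang: the Spec Goroutine is parked in sync.(*WaitGroup).Wait inside testReboot, while one worker goroutine per node (goroutine 3647 here) sits in the pod-readiness check for bootstrap-e2e-minion-group-7sd9. A minimal sketch of that fan-out, with illustrative names and timings standing in for the real helpers in test/e2e/cloud/gcp/reboot.go:

```go
// Sketch of the structure visible in the goroutine dump above. All names
// and timings are illustrative stand-ins, not the suite's actual code.
package main

import (
	"fmt"
	"sync"
	"time"
)

// podsRunningReadyOrSucceeded stands in for the framework check that kept
// logging "running and ready, or succeeded" above; the real one lists the
// node's no-liveness-probe pods and inspects their phase and Ready condition.
func podsRunningReadyOrSucceeded(node string) bool {
	return false // stub so the sketch compiles
}

// rebootNode mirrors the per-node flow: disrupt the node, then poll its
// pods every ~2s (the cadence of the log lines above) until the timeout.
func rebootNode(node string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if podsRunningReadyOrSucceeded(node) {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

func testReboot(nodes []string) error {
	result := make([]bool, len(nodes))
	var wg sync.WaitGroup
	wg.Add(len(nodes))
	for i := range nodes {
		go func(i int) { // one worker per node, like goroutine 3647
			defer wg.Done()
			result[i] = rebootNode(nodes[i], 5*time.Minute)
		}(i)
	}
	wg.Wait() // the semacquire the Spec Goroutine is parked in
	for i, ok := range result {
		if !ok {
			return fmt.Errorf("node %s failed reboot test", nodes[i])
		}
	}
	return nil
}

func main() {
	nodes := []string{
		"bootstrap-e2e-minion-group-3n8r",
		"bootstrap-e2e-minion-group-7sd9",
		"bootstrap-e2e-minion-group-90fc",
	}
	if err := testReboot(nodes); err != nil {
		fmt.Println(err)
	}
}
```

The per-node results are only inspected after Wait returns, which is why the "Reboot successful" lines for the two healthy nodes appear minutes before the test finally fails on the third.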
Jan 29 11:08:15.375: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:02:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:02:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.19 PodIPs:[{IP:10.64.3.19}] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 20s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 11:01:26 +0000 UTC,FinishedAt:2023-01-29 11:02:55 +0000 UTC,ContainerID:containerd://7aa52ffd2a80100b3b8e372bac3ed9c5fa07e7b33722262869173a446eb64507,}} Ready:false RestartCount:4 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://7aa52ffd2a80100b3b8e372bac3ed9c5fa07e7b33722262869173a446eb64507 Started:0xc000d2153f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
Jan 29 11:08:15.421: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: an error on the server ("unknown") has prevented the request from succeeding (get pods volume-snapshot-controller-0):
Jan 29 11:08:15.421: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: an error on the server ("unknown") has prevented the request from succeeding (get pods volume-snapshot-controller-0):
Jan 29 11:08:15.421: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-47h2m: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:59:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:00:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:1 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://47de7bf651c6c66b4beb7067f0cd8237151462cd30542dae17a4415076b6cc9c Started:0xc000d20a9a}] QOSClass:Burstable EphemeralContainerStatuses:[]}
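Both "Retrieving log" attempts fail with the apiserver's opaque `an error on the server ("unknown")` response, so neither the current nor the previous container log is recoverable at this point. What the framework is attempting is the client-go equivalent of `kubectl logs --previous`; a minimal sketch, assuming the kubeconfig path logged for this run, with error handling trimmed:

```go
package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged at the top of this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Previous=true asks for the last terminated container's log, which is
	// what "Retrieving log for the last terminated container" refers to.
	req := client.CoreV1().Pods("kube-system").GetLogs(
		"volume-snapshot-controller-0",
		&corev1.PodLogOptions{Container: "volume-snapshot-controller", Previous: true},
	)
	stream, err := req.Stream(context.Background())
	if err != nil {
		// This run got: an error on the server ("unknown") has prevented
		// the request from succeeding.
		fmt.Println("retrieving previous log:", err)
		return
	}
	defer stream.Close()
	data, _ := io.ReadAll(stream)
	fmt.Print(string(data))
}
```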
Jan 29 11:08:15.466: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-dns-autoscaler-5f6455f985-47h2m):
Jan 29 11:08:15.466: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-dns-autoscaler-5f6455f985-47h2m):
Jan 29 11:08:15.466: INFO: Node bootstrap-e2e-minion-group-7sd9 failed reboot test.
Jan 29 11:08:15.466: INFO: Executing termination hook on nodes
Jan 29 11:08:15.466: INFO: Getting external IP address for bootstrap-e2e-minion-group-3n8r
Jan 29 11:08:15.466: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-3n8r(34.145.60.3:22)
Jan 29 11:08:15.994: INFO: ssh prow@34.145.60.3:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 11:08:15.994: INFO: ssh prow@34.145.60.3:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 11:03:25 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 11:08:15.994: INFO: ssh prow@34.145.60.3:22: stderr: ""
Jan 29 11:08:15.994: INFO: ssh prow@34.145.60.3:22: exit code: 0
Jan 29 11:08:15.994: INFO: Getting external IP address for bootstrap-e2e-minion-group-7sd9
Jan 29 11:08:15.994: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-7sd9(34.168.47.126:22)
Jan 29 11:08:16.538: INFO: ssh prow@34.168.47.126:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 11:08:16.538: INFO: ssh prow@34.168.47.126:22: stdout: ""
Jan 29 11:08:16.538: INFO: ssh prow@34.168.47.126:22: stderr: "cat: /tmp/drop-inbound.log: No such file or directory\n"
Jan 29 11:08:16.538: INFO: ssh prow@34.168.47.126:22: exit code: 1
Jan 29 11:08:16.538: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-inbound.log: No such file or directory
)
Jan 29 11:08:16.538: INFO: Getting external IP address for bootstrap-e2e-minion-group-90fc
Jan 29 11:08:16.538: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-90fc(34.105.52.142:22)
Jan 29 11:08:17.074: INFO: ssh prow@34.105.52.142:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 11:08:17.074: INFO: ssh prow@34.105.52.142:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 11:03:25 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 11:08:17.074: INFO: ssh prow@34.105.52.142:22: stderr: ""
Jan 29 11:08:17.075: INFO: ssh prow@34.105.52.142:22: exit code: 0
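The xtrace output collected from 3n8r and 90fc is enough to reconstruct the disruption: accept loopback traffic, insert a blanket DROP on INPUT, sleep 120s, then remove both rules, with the trace written to /tmp/drop-inbound.log for the termination hook to collect. Notably, 7sd9, the node that failed the test, has no such log file left to collect. The sketch below is a plausible reconstruction from the trace, not the authoritative command in reboot.go; the nohup/sh -x wrapper and the helper name are assumptions, only the iptables sequence is taken from the log:

```go
package main

import "fmt"

// dropInboundCommand rebuilds, from the "+ ..." xtrace above, the shell the
// test appears to run on each node. The retry loops show up in the trace as
// the repeated "+ true ... + break" lines.
func dropInboundCommand(dropSeconds int) string {
	script := fmt.Sprintf(
		"sleep 10 && "+
			"while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done && "+
			"while true; do sudo iptables -I INPUT 2 -j DROP && break; done && "+
			"date && sleep %d && "+
			"while true; do sudo iptables -D INPUT -j DROP && break; done && "+
			"while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done",
		dropSeconds)
	// -x produces the "+ ..." trace; the log file is what the termination
	// hook later reads with "cat /tmp/drop-inbound.log && rm ...".
	return fmt.Sprintf("nohup sh -x -c '%s' >/tmp/drop-inbound.log 2>&1 &", script)
}

func main() {
	fmt.Println(dropInboundCommand(120))
}
```

The leading ACCEPT rule for 127.0.0.1 is what keeps the node's own loopback traffic alive while everything else inbound, including the apiserver and SSH, is dropped for the two-minute window.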
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:08:17.075
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 11:08:17.075 (5m2.258s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:08:17.075
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 11:08:17.076
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-85z9q to bootstrap-e2e-minion-group-7sd9
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.639797337s (2.639812936s including waiting)
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container coredns
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container coredns
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container coredns
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container coredns
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container coredns
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: Get "http://10.64.3.17:8181/ready": dial tcp 10.64.3.17:8181: connect: connection refused
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container coredns
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-85z9q_kube-system(a8de34c0-3754-4f31-8c5e-d047238243e1)
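The two FailedScheduling events explain the long Pending phase seen earlier: while the rebooted nodes still carried the node.kubernetes.io/not-ready taint, the replacement coredns pod had nowhere to go and preemption could not help. DaemonSet pods such as metadata-proxy tolerate these taints automatically; a plain Deployment pod would need tolerations along these lines (values illustrative, not read from this cluster):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	grace := int64(300) // example: ride out up to 5 minutes of disruption
	tolerations := []corev1.Toleration{
		{
			Key:               "node.kubernetes.io/not-ready",
			Operator:          corev1.TolerationOpExists,
			Effect:            corev1.TaintEffectNoExecute,
			TolerationSeconds: &grace,
		},
		{
			Key:               "node.kubernetes.io/unreachable",
			Operator:          corev1.TolerationOpExists,
			Effect:            corev1.TaintEffectNoExecute,
			TolerationSeconds: &grace,
		},
	}
	// Leaving Effect empty would match every effect, including the
	// NoSchedule taint named in the FailedScheduling message above.
	fmt.Printf("%+v\n", tolerations)
}
```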
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: Get "http://10.64.3.17:8181/ready": dial tcp 10.64.3.17:8181: connect: connection refused Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-85z9q_kube-system(a8de34c0-3754-4f31-8c5e-d047238243e1) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-tbk49 to bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.071624726s (1.071641644s including waiting) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": dial tcp 10.64.2.4:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Container coredns failed liveness probe, will be restarted Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-tbk49 Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-85z9q Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-tbk49 Jan 29 11:08:17.138: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 11:08:17.138: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
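The Unhealthy events above are the kubelet probing the pod IP over HTTP; once the liveness probe fails repeatedly, the container is killed and restarted ("failed liveness probe, will be restarted"). If the cluster is still up, the same evidence can be pulled afterwards with standard kubectl; a minimal sketch, with the pod name taken from the log:

# Warning events for the failing CoreDNS pod, oldest first.
kubectl -n kube-system get events \
  --field-selector involvedObject.name=coredns-6846b5b5f-tbk49,type=Warning \
  --sort-by=.lastTimestamp

# Probe definitions, restart count, and last container state for that pod.
kubectl -n kube-system describe pod coredns-6846b5b5f-tbk49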
Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed:
Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 11:08:17.138: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a845b became leader
Jan 29 11:08:17.138: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a15ba became leader
Jan 29 11:08:17.138: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_bd5ff became leader
Jan 29 11:08:17.138: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_66f84 became leader
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b69l8 to bootstrap-e2e-minion-group-7sd9
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.678277477s (1.67829785s including waiting)
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b69l8_kube-system(fae56098-57a4-4079-a8fc-75f48b84c442)
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-h9nwn to bootstrap-e2e-minion-group-3n8r
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.129847ms (676.140226ms including waiting)
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-h9nwn_kube-system(0ac52dd7-f76d-4f28-9d8a-8af2e2676683)
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-kxtrk to bootstrap-e2e-minion-group-90fc
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 824.620705ms (824.644728ms including waiting)
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container konnectivity-agent
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": dial tcp 10.64.1.5:8093: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 11:08:17.138: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b69l8
Jan 29 11:08:17.138: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-kxtrk
Jan 29 11:08:17.138: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-h9nwn
Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused
Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 11:08:17.138: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 11:08:17.138: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 11:08:17.138: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0298c03a-3832-4855-a2af-cf203f6d5229 became leader
Jan 29 11:08:17.138: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_b64428ec-4368-4776-ac50-8d5ce5d3c3d7 became leader
Jan 29 11:08:17.138: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_93420249-344c-40fd-8874-2327496da9f4 became leader
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-47h2m to bootstrap-e2e-minion-group-7sd9
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.618413775s (1.618457503s including waiting)
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container autoscaler
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container autoscaler
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container autoscaler
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
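The FailedScheduling messages above are taint-driven: until a rebooted node reports Ready again it carries the node.kubernetes.io/not-ready taint, and the scheduler skips it unless a pod tolerates that taint. A quick sketch for seeing which taints are blocking placement, with the node name taken from the log:

# One line per node with the taint keys currently applied.
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

# Full taint detail for a single node.
kubectl describe node bootstrap-e2e-minion-group-7sd9 | grep -A3 Taints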
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-47h2m
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-3n8r_kube-system(b5176a347e88e1ff4660b164d3f16916)
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
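The FailedCreate above is a bootstrap-ordering issue rather than a reboot symptom: the ReplicaSet cannot create pods until its ServiceAccount exists, and the controller keeps retrying until it does. A hypothetical follow-up with standard kubectl to confirm the account appeared and to surface any create failures still occurring:

# Does the service account exist now?
kubectl -n kube-system get serviceaccount kube-dns-autoscaler

# Any controllers still failing to create pods?
kubectl -n kube-system get events --field-selector reason=FailedCreate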
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-3n8r_kube-system(b5176a347e88e1ff4660b164d3f16916)
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7sd9_kube-system(20e39278d9aad8613df3183ed37c4881)
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Stopping container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-90fc_kube-system(81cae927179b6a5281a90fdaa765ded2)
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container kube-proxy
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_425c93d9-4e38-470f-b4ba-e1a7e536d147 became leader
Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c84835d1-579f-4af3-bbe9-2d8899072690 became leader
Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_00ed0cb9-b982-4f69-9378-8d53a0626551 became leader
Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3c66f968-f07b-4c3a-8b08-d3d24ec883af became leader
Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_323c779c-76b3-4e92-ab66-cc172e33c203 became leader
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
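Each "became leader" event above marks a restart of the component followed by a fresh leader election; five scheduler elections in one run is consistent with the repeated BackOff restarts. Assuming the Lease-based election used by current control planes, the active holders can be read directly; a sketch:

# Current holders of the control-plane leader-election leases.
kubectl -n kube-system get lease kube-scheduler kube-controller-manager \
  -o custom-columns='NAME:.metadata.name,HOLDER:.spec.holderIdentity'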
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fqgll to bootstrap-e2e-minion-group-7sd9
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 660.702719ms (660.716002ms including waiting)
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container default-http-backend
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container default-http-backend
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container default-http-backend
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container default-http-backend
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fqgll
Jan 29 11:08:17.138: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-9whkb to bootstrap-e2e-master
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 778.60477ms (778.615516ms including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.176461611s (2.176470734s including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-mwf7j to bootstrap-e2e-minion-group-90fc
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 848.523206ms (848.543058ms including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.257052519s (2.25706204s including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ppxd4 to bootstrap-e2e-minion-group-7sd9
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 812.309842ms (812.385382ms including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.092848104s (2.092909933s including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-zzqvh to bootstrap-e2e-minion-group-3n8r
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 882.001192ms (882.012724ms including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.161701332s (2.161712043s including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
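metadata-proxy-v0.1 is a DaemonSet, so one pod per node is expected; the NodeNotReady/SandboxChanged churn above is each node's copy being rebuilt after the disruption. A sketch for checking recovery afterwards, with object names taken from the log:

# DESIRED vs READY for the per-node proxy pods.
kubectl -n kube-system get daemonset metadata-proxy-v0.1

# Per-pod placement; there should be one pod per node.
kubectl -n kube-system get pods -o wide | grep metadata-proxy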
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container metadata-proxy
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container prometheus-to-sd-exporter
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-9whkb
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-mwf7j
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-zzqvh
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ppxd4
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-6vkcg to bootstrap-e2e-minion-group-7sd9
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.5552357s (2.555248653s including waiting)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container metrics-server
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container metrics-server
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.443599616s (2.443627566s including waiting)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container metrics-server-nanny
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container metrics-server-nanny
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container metrics-server
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container metrics-server-nanny
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-6vkcg
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-6vkcg
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-vfwlz to bootstrap-e2e-minion-group-90fc
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.439956154s (1.43999999s including waiting)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metrics-server
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metrics-server
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.063883654s (1.063902072s including waiting)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metrics-server-nanny
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metrics-server-nanny
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metrics-server
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metrics-server
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metrics-server-nanny
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metrics-server-nanny
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "https://10.64.1.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Stopping container metrics-server
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Stopping container metrics-server-nanny
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-vfwlz_kube-system(43862482-416e-4d81-a91d-a9986c67b520)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-vfwlz_kube-system(43862482-416e-4d81-a91d-a9986c67b520)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-vfwlz
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-vfwlz
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-7sd9
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.131380985s (3.131396318s including waiting)
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container volume-snapshot-controller
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container volume-snapshot-controller
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container volume-snapshot-controller
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa)
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
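The BackOff events above mean these containers are crash-looping rather than restarting once. Restart counts and the dying container's last output are usually the fastest signal; a sketch with the pod name taken from the log:

# Restart counts for the crash-looping pods named in the BackOff events.
kubectl -n kube-system get pods -o wide

# Logs from the previous (crashed) instance of the container.
kubectl -n kube-system logs volume-snapshot-controller-0 --previous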
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa) Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:08:17.138 (64ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:08:17.138 Jan 29 11:08:17.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:08:17.187 (48ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 11:08:17.187 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 11:08:17.187 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:08:17.187 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:08:17.189 STEP: Collecting events from namespace "reboot-9358". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 11:08:17.189 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 11:08:17.231 Jan 29 11:08:17.273: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 11:08:17.273: INFO: Jan 29 11:08:17.323: INFO: Logging node info for node bootstrap-e2e-master Jan 29 11:08:17.367: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 13fcdc99-d52b-4449-9d12-c22cc2165092 1478 0 2023-01-29 10:57:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 10:57:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 10:57:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 10:57:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 11:03:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-3-g1-4-up-clu-n/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 10:57:47 +0000 UTC,LastTransitionTime:2023-01-29 10:57:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 11:03:26 +0000 UTC,LastTransitionTime:2023-01-29 10:57:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 11:03:26 +0000 UTC,LastTransitionTime:2023-01-29 10:57:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 11:03:26 +0000 UTC,LastTransitionTime:2023-01-29 10:57:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 11:03:26 +0000 UTC,LastTransitionTime:2023-01-29 10:57:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.171.183,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-up-c1-3-g1-4-up-clu-n.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-up-c1-3-g1-4-up-clu-n.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6e19589febb4d719b2a61e5595f77136,SystemUUID:6e19589f-ebb4-d719-b2a6-1e5595f77136,BootID:29bc0c62-e047-4b19-8209-442f993828f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-4-gfbf145b31,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 11:08:17.367: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 11:08:17.416: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 11:08:17.492: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 10:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container l7-lb-controller ready: true, restart count 5 Jan 29 11:08:17.492: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 10:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container kube-apiserver ready: true, restart count 1 Jan 29 11:08:17.492: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 10:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 29 11:08:17.492: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 10:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container kube-addon-manager ready: true, restart count 1 Jan 29 11:08:17.492: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 10:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container konnectivity-server-container ready: true, restart count 3 Jan 29 11:08:17.492: INFO: metadata-proxy-v0.1-9whkb started at 2023-01-29 10:57:53 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:17.492: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 11:08:17.492: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 11:08:17.492: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 10:56:44 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container kube-scheduler ready: true, restart count 4 Jan 29 11:08:17.492: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 10:56:44 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container etcd-container ready: true, restart count 1 Jan 29 11:08:17.492: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 10:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container etcd-container ready: true, restart count 2 Jan 29 11:08:17.777: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 11:08:17.777: INFO: Logging node info for node bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.822: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-3n8r 2308ea2b-6f43-4767-9035-72a71358d4e8 1764 0 2023-01-29 10:57:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-3n8r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 10:57:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 11:04:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 11:05:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 11:05:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 11:05:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-3-g1-4-up-clu-n/us-west1-b/bootstrap-e2e-minion-group-3n8r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 10:57:47 +0000 UTC,LastTransitionTime:2023-01-29 10:57:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:25 +0000 UTC,LastTransitionTime:2023-01-29 11:05:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:25 +0000 UTC,LastTransitionTime:2023-01-29 11:05:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:25 +0000 UTC,LastTransitionTime:2023-01-29 11:05:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 11:05:25 +0000 UTC,LastTransitionTime:2023-01-29 11:05:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.60.3,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-3n8r.c.gce-up-c1-3-g1-4-up-clu-n.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-3n8r.c.gce-up-c1-3-g1-4-up-clu-n.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:74dbd23ceb30f7e1b9778ca9043a85b7,SystemUUID:74dbd23c-eb30-f7e1-b977-8ca9043a85b7,BootID:f5724939-2ebb-4c25-bed7-3c19855449d6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-4-gfbf145b31,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 11:08:17.822: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.880: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.988: INFO: coredns-6846b5b5f-tbk49 started at 2023-01-29 10:57:57 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.988: INFO: Container coredns ready: true, restart count 2 Jan 29 11:08:17.988: INFO: kube-proxy-bootstrap-e2e-minion-group-3n8r started at 2023-01-29 10:57:31 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.988: INFO: Container kube-proxy ready: false, restart count 5 Jan 29 11:08:17.988: INFO: metadata-proxy-v0.1-zzqvh started at 2023-01-29 10:57:32 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:17.988: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 11:08:17.988: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 11:08:17.988: INFO: konnectivity-agent-h9nwn started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.988: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 29 11:08:18.175: INFO: Latency metrics for node bootstrap-e2e-minion-group-3n8r Jan 29 11:08:18.175: INFO: Logging node info for node bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:18.220: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7sd9 71363d69-32a3-46a0-a1ba-c7e4cd4f021b 1778 0 2023-01-29 10:57:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7sd9 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 10:57:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 10:59:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 11:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 11:05:37 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 11:05:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-3-g1-4-up-clu-n/us-west1-b/bootstrap-e2e-minion-group-7sd9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 
UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 10:57:47 +0000 UTC,LastTransitionTime:2023-01-29 10:57:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.47.126,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7sd9.c.gce-up-c1-3-g1-4-up-clu-n.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7sd9.c.gce-up-c1-3-g1-4-up-clu-n.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8b45e6d3d45c059d9dc5a25fed23489d,SystemUUID:8b45e6d3-d45c-059d-9dc5-a25fed23489d,BootID:59628d06-8aa4-40bc-8a1a-a94d9cd48de1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-4-gfbf145b31,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 11:08:18.223: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:18.270: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:18.342: INFO: metadata-proxy-v0.1-ppxd4 started at 2023-01-29 10:57:35 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:18.342: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 11:08:18.342: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 11:08:18.342: INFO: konnectivity-agent-b69l8 started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 11:08:18.342: INFO: kube-proxy-bootstrap-e2e-minion-group-7sd9 started at 2023-01-29 10:57:34 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 11:08:18.342: INFO: l7-default-backend-8549d69d99-fqgll started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container default-http-backend ready: true, restart count 1 Jan 29 11:08:18.342: INFO: volume-snapshot-controller-0 started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container volume-snapshot-controller ready: true, restart count 7 Jan 29 11:08:18.342: INFO: coredns-6846b5b5f-85z9q started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container coredns ready: false, restart count 2 Jan 29 11:08:18.342: INFO: kube-dns-autoscaler-5f6455f985-47h2m started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container autoscaler ready: false, restart count 1 Jan 29 11:08:18.526: INFO: Latency metrics for node bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:18.526: INFO: Logging node info for node bootstrap-e2e-minion-group-90fc Jan 29 11:08:18.570: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-90fc 7e9be70e-bdfd-46c5-b708-36a329fba312 1765 0 2023-01-29 10:57:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-90fc kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 10:57:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 11:04:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 11:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 11:05:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 11:05:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-3-g1-4-up-clu-n/us-west1-b/bootstrap-e2e-minion-group-90fc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 10:57:47 +0000 UTC,LastTransitionTime:2023-01-29 10:57:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:26 +0000 UTC,LastTransitionTime:2023-01-29 11:05:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:26 +0000 UTC,LastTransitionTime:2023-01-29 11:05:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:26 +0000 UTC,LastTransitionTime:2023-01-29 11:05:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 11:05:26 +0000 UTC,LastTransitionTime:2023-01-29 11:05:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.105.52.142,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-90fc.c.gce-up-c1-3-g1-4-up-clu-n.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-90fc.c.gce-up-c1-3-g1-4-up-clu-n.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf512e8691a9af13f2b194f33a3d645,SystemUUID:7cf512e8-691a-9af1-3f2b-194f33a3d645,BootID:34f2816f-fe16-46e6-bcdf-b877a0c0c870,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-4-gfbf145b31,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 11:08:18.570: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-90fc Jan 29 11:08:18.616: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-90fc Jan 29 11:08:18.685: INFO: konnectivity-agent-kxtrk started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.685: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 11:08:18.685: INFO: metrics-server-v0.5.2-867b8754b9-vfwlz started at 2023-01-29 10:58:11 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:18.685: INFO: Container metrics-server ready: false, restart count 4 Jan 29 11:08:18.685: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 29 11:08:18.685: INFO: kube-proxy-bootstrap-e2e-minion-group-90fc started at 2023-01-29 10:57:31 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.685: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 11:08:18.685: INFO: metadata-proxy-v0.1-mwf7j started at 2023-01-29 10:57:32 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:18.685: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 11:08:18.685: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 11:08:18.875: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-90fc END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:08:18.875 (1.686s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:08:18.875 (1.688s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:08:18.875 STEP: Destroying namespace "reboot-9358" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 11:08:18.875 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:08:18.92 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:08:18.921 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:08:18.922 (0s)
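The events and node dumps above were emitted by the e2e framework's namespace-dump helper after the failure. As a rough triage aid (not part of the test output), roughly the same information can be pulled from a live cluster with kubectl; the pod and node names below are the ones from this run:

    # Events recorded for the crash-looping metrics-server pod
    kubectl get events -n kube-system \
      --field-selector involvedObject.name=metrics-server-v0.5.2-867b8754b9-vfwlz \
      --sort-by=.lastTimestamp

    # The node Ready condition the test polls while waiting for reboots
    kubectl get node bootstrap-e2e-minion-group-3n8r \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'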
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:08:17.075 from junit_01.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:02:16.953 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:02:16.953 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:02:16.953 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 11:02:16.953 Jan 29 11:02:16.953: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 11:02:16.954 Jan 29 11:02:16.994: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 11:03:14.645 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 11:03:14.728 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:03:14.817 (57.864s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:03:14.817 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:03:14.817 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 11:03:14.817 Jan 29 11:03:14.998: INFO: Getting bootstrap-e2e-minion-group-3n8r Jan 29 11:03:14.999: INFO: Getting bootstrap-e2e-minion-group-90fc Jan 29 11:03:14.999: INFO: Getting bootstrap-e2e-minion-group-7sd9 Jan 29 11:03:15.042: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-90fc condition Ready to be true Jan 29 11:03:15.042: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true Jan 29 11:03:15.062: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7sd9 condition Ready to be true Jan 29 11:03:15.187: INFO: Node bootstrap-e2e-minion-group-3n8r has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r] Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r] Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:03:15.187: INFO: Node bootstrap-e2e-minion-group-90fc has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc] Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc] Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:03:15.187: INFO: Waiting up to 5m0s for pod 
"metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:03:15.195: INFO: Node bootstrap-e2e-minion-group-7sd9 has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m] Jan 29 11:03:15.195: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m] Jan 29 11:03:15.195: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-47h2m" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:03:15.195: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ppxd4" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:03:15.195: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:03:15.196: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:03:15.255: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9": Phase="Running", Reason="", readiness=true. Elapsed: 58.981409ms Jan 29 11:03:15.255: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 67.776763ms Jan 29 11:03:15.255: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" satisfied condition "running and ready, or succeeded" Jan 29 11:03:15.255: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded" Jan 29 11:03:15.256: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 68.736486ms Jan 29 11:03:15.256: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded" Jan 29 11:03:15.256: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r] Jan 29 11:03:15.256: INFO: Getting external IP address for bootstrap-e2e-minion-group-3n8r Jan 29 11:03:15.256: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-3n8r(34.145.60.3:22) Jan 29 11:03:15.257: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 61.701965ms Jan 29 11:03:15.257: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:03:15.259: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 63.512457ms Jan 29 11:03:15.259: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:03:15.261: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 73.214836ms Jan 29 11:03:15.261: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded" Jan 29 11:03:15.262: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 74.838747ms Jan 29 11:03:15.262: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded" Jan 29 11:03:15.262: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc] Jan 29 11:03:15.262: INFO: Getting external IP address for bootstrap-e2e-minion-group-90fc Jan 29 11:03:15.262: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-90fc(34.105.52.142:22) Jan 29 11:03:15.264: INFO: Pod "metadata-proxy-v0.1-ppxd4": Phase="Running", Reason="", readiness=true. 
Elapsed: 68.822676ms Jan 29 11:03:15.264: INFO: Pod "metadata-proxy-v0.1-ppxd4" satisfied condition "running and ready, or succeeded" Jan 29 11:03:15.800: INFO: ssh prow@34.145.60.3:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 11:03:15.800: INFO: ssh prow@34.145.60.3:22: stdout: "" Jan 29 11:03:15.800: INFO: ssh prow@34.145.60.3:22: stderr: "" Jan 29 11:03:15.800: INFO: ssh prow@34.145.60.3:22: exit code: 0 Jan 29 11:03:15.800: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be false Jan 29 11:03:15.809: INFO: ssh prow@34.105.52.142:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 11:03:15.809: INFO: ssh prow@34.105.52.142:22: stdout: "" Jan 29 11:03:15.809: INFO: ssh prow@34.105.52.142:22: stderr: "" Jan 29 11:03:15.809: INFO: ssh prow@34.105.52.142:22: exit code: 0 Jan 29 11:03:15.809: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be false Jan 29 11:03:15.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:03:15.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:03:17.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.103921744s Jan 29 11:03:17.299: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:03:17.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107673094s Jan 29 11:03:17.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:03:17.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:03:17.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
Jan 29 11:03:19.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.103931143s
Jan 29 11:03:19.299: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:19.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105729321s
Jan 29 11:03:19.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:19.932: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:19.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:21.300: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.104443797s
Jan 29 11:03:21.300: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:21.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106035195s
Jan 29 11:03:21.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:21.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:21.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:23.299: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.103375406s
Jan 29 11:03:23.299: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105496194s
Jan 29 11:03:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:24.020: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:24.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:25.300: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.104198431s
Jan 29 11:03:25.300: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:02:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:03:25.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.105962567s
Jan 29 11:03:25.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:26.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:26.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:27.301: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.105339282s
Jan 29 11:03:27.301: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 11:03:27.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.107285507s
Jan 29 11:03:27.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:28.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:28.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:29.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 14.106277456s
Jan 29 11:03:29.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:30.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:30.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:31.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 16.105385792s
Jan 29 11:03:31.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:32.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:32.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:33.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 18.106563279s
Jan 29 11:03:33.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:34.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:34.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:35.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 20.10662367s
Jan 29 11:03:35.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:36.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:36.290: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:37.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 22.105850968s
Jan 29 11:03:37.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:38.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:38.334: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:39.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 24.106946012s
Jan 29 11:03:39.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:40.371: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:40.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:41.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 26.106728189s
Jan 29 11:03:41.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:42.414: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:42.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:43.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 28.105832638s
Jan 29 11:03:43.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:44.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:44.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:45.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 30.106048733s
Jan 29 11:03:45.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:46.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:46.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:47.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 32.106522062s
Jan 29 11:03:47.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:48.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:48.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:49.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 34.105432891s
Jan 29 11:03:49.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:50.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:50.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:51.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 36.105402719s
Jan 29 11:03:51.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:52.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:52.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:53.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 38.106197286s
Jan 29 11:03:53.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:54.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:54.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:55.308: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 40.112820744s
Jan 29 11:03:55.308: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:56.717: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:56.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:57.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 42.105518081s
Jan 29 11:03:57.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:03:58.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:58.770: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:03:59.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 44.106583327s
Jan 29 11:03:59.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:00.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:00.814: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:01.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 46.105949353s
Jan 29 11:04:01.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:02.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:02.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:03.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 48.105595491s
Jan 29 11:04:03.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:04.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:04.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
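Throughout this window the framework polls on a roughly two-second cadence: volume-snapshot-controller-0 recovers at 11:03:27, kube-dns-autoscaler-5f6455f985-47h2m stays Pending, and each target node's Ready condition remains true until the node controller's grace period for missed kubelet heartbeats elapses. A rough interactive equivalent of the node-side check, shown only to make the state transition concrete (these commands are not part of the test run), would be:

    # Stream node status updates as the kubelet heartbeats stall.
    kubectl get nodes -w

    # One-shot read of a single node's Ready condition and reason.
    kubectl get node bootstrap-e2e-minion-group-90fc \
      -o jsonpath='{range .status.conditions[?(@.type=="Ready")]}{.status}{" "}{.reason}{"\n"}{end}'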
Jan 29 11:04:05.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 50.107360615s
Jan 29 11:04:05.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:06.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:06.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:07.322: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 52.126751258s
Jan 29 11:04:07.322: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:08.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:09.001: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:04:09.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 54.107271272s
Jan 29 11:04:09.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:11.024: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true
Jan 29 11:04:11.045: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be true
Jan 29 11:04:11.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:11.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:11.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 56.105719892s
Jan 29 11:04:11.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:13.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:13.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:13.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 58.106154339s
Jan 29 11:04:13.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:15.154: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:15.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:15.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.10596995s
Jan 29 11:04:15.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:17.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:17.222: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:17.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.106215107s
Jan 29 11:04:17.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:19.244: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:19.266: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:19.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.106906666s
Jan 29 11:04:19.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:21.288: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:21.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.106984607s
Jan 29 11:04:21.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:21.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
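At 11:04:11 both nodes flip to NotReady with Reason NodeStatusUnknown, and the node lifecycle controller then adds node.kubernetes.io/unreachable taints (NoSchedule first, then NoExecute a few seconds later); the test's wait-for-Ready check reports "Failure" while those taints are present. To inspect the same state by hand, something like the following hypothetical invocation (not part of the test run) would do:

    # List any taints on the cut-off node along with when they were added.
    kubectl get node bootstrap-e2e-minion-group-90fc \
      -o jsonpath='{range .spec.taints[*]}{.key}{" "}{.effect}{" "}{.timeAdded}{"\n"}{end}'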
Jan 29 11:04:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.105720829s
Jan 29 11:04:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:23.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:04:23.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:25.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.105499166s
Jan 29 11:04:25.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:25.375: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:25.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:27.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.105853524s
Jan 29 11:04:27.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:27.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:27.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:29.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.106840021s
Jan 29 11:04:29.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:29.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:29.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:31.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.105457146s
Jan 29 11:04:31.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:31.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:31.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:33.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.105570162s
Jan 29 11:04:33.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:33.549: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:33.574: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:35.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.106430239s
Jan 29 11:04:35.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:35.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:35.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:37.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.105654443s
Jan 29 11:04:37.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:37.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:37.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:39.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.107204678s
Jan 29 11:04:39.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:39.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:39.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:41.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.105716939s
Jan 29 11:04:41.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:41.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:41.776: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:43.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.107072859s
Jan 29 11:04:43.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:43.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:43.819: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:45.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.10551583s
Jan 29 11:04:45.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:45.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:45.863: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:47.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.107149559s
Jan 29 11:04:47.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:47.863: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:47.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:49.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.106023769s
Jan 29 11:04:49.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:49.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:49.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:51.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.105578344s
Jan 29 11:04:51.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:51.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:51.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:53.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.105506992s
Jan 29 11:04:53.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:53.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:54.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:55.316: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.120859435s
Jan 29 11:04:55.316: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:56.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:56.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:57.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.10544128s
Jan 29 11:04:57.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:04:58.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:04:58.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:04:59.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.108162943s
Jan 29 11:04:59.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:00.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:00.201: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:01.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.106491821s
Jan 29 11:05:01.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:02.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:02.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:03.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.105850406s
Jan 29 11:05:03.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:04.222: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:04.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:05.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.107476208s
Jan 29 11:05:05.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:06.267: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:06.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:07.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.105779562s
Jan 29 11:05:07.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:08.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:08.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:09.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.106944879s
Jan 29 11:05:09.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:10.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:10.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:11.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.106799305s
Jan 29 11:05:11.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:12.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:12.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:13.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.106431077s
Jan 29 11:05:13.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:14.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:14.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:15.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.105516813s
Jan 29 11:05:15.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:16.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:16.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:17.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.106724906s
Jan 29 11:05:17.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:18.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:18.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:19.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.109463617s
Jan 29 11:05:19.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:20.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:20.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:21.306: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.110571725s
Jan 29 11:05:21.306: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:22.625: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:22.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.106007999s
Jan 29 11:05:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:24.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:24.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:04:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:25.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.107585541s
Jan 29 11:05:25.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:26.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:26.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
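Once the 120-second DROP window ends, the kubelets resume posting status: Ready flips back to true at 11:05:26, though the lingering NoExecute taint keeps the check reporting Failure for a few more seconds until the node lifecycle controller removes it. The test then re-verifies each node's per-node system pods before declaring the reboot successful; roughly the same check could be expressed as the following hypothetical command (not the framework's own code):

    # Require the per-node system pods to be Ready again within 5 minutes.
    kubectl -n kube-system wait --for=condition=Ready --timeout=5m \
      pod/kube-proxy-bootstrap-e2e-minion-group-90fc pod/metadata-proxy-v0.1-mwf7j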
Jan 29 11:05:27.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.105756982s
Jan 29 11:05:27.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:28.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:25 +0000 UTC}]. Failure
Jan 29 11:05:28.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:04:14 +0000 UTC}]. Failure
Jan 29 11:05:29.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.107402263s
Jan 29 11:05:29.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:30.802: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r]
Jan 29 11:05:30.802: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:05:30.802: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:05:30.857: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 54.803897ms
Jan 29 11:05:30.857: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded"
Jan 29 11:05:30.857: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 54.907245ms
Jan 29 11:05:30.857: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded"
Jan 29 11:05:30.857: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-zzqvh kube-proxy-bootstrap-e2e-minion-group-3n8r]
Jan 29 11:05:30.857: INFO: Reboot successful on node bootstrap-e2e-minion-group-3n8r
Jan 29 11:05:30.873: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc]
Jan 29 11:05:30.873: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:05:30.873: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:05:30.921: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=false. Elapsed: 47.973364ms
Jan 29 11:05:30.921: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=false. Elapsed: 48.037289ms
Jan 29 11:05:30.921: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-90fc' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:04:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:58:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }]
Jan 29 11:05:30.921: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-mwf7j' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:04:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:00:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }]
Jan 29 11:05:31.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.106512405s
Jan 29 11:05:31.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:32.966: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=false. Elapsed: 2.092989554s
Jan 29 11:05:32.966: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=false. Elapsed: 2.093115856s
Jan 29 11:05:32.966: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-mwf7j' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:04:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:00:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }]
Jan 29 11:05:32.966: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-90fc' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:04:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:58:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }]
Jan 29 11:05:33.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.107145872s
Jan 29 11:05:33.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:34.967: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 4.093921458s
Jan 29 11:05:34.967: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded"
Jan 29 11:05:34.967: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 4.094122697s
Jan 29 11:05:34.967: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded"
Jan 29 11:05:34.967: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-mwf7j kube-proxy-bootstrap-e2e-minion-group-90fc]
Jan 29 11:05:34.967: INFO: Reboot successful on node bootstrap-e2e-minion-group-90fc
Jan 29 11:05:35.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.106790659s
Jan 29 11:05:35.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:37.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.106148573s
Jan 29 11:05:37.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:39.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.10716398s
Jan 29 11:05:39.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:41.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.105635917s
Jan 29 11:05:41.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:43.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.107192249s
Jan 29 11:05:43.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:45.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.106259487s
Jan 29 11:05:45.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:05:47.327: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.131339022s
Jan 29 11:05:47.327: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:06:10.916: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m55.720797005s
Jan 29 11:06:10.916: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:06:11.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false.
Elapsed: 2m56.107578296s Jan 29 11:06:11.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:13.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.107040609s Jan 29 11:06:13.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:15.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.106616215s Jan 29 11:06:15.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:17.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.106197679s Jan 29 11:06:17.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:19.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.107144296s Jan 29 11:06:19.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:21.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.108977506s Jan 29 11:06:21.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.106071607s Jan 29 11:06:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:25.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.10540955s Jan 29 11:06:25.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:27.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.106018881s Jan 29 11:06:27.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:29.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.107281987s Jan 29 11:06:29.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:31.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m16.108349082s Jan 29 11:06:31.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:33.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.10605277s Jan 29 11:06:33.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:35.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.106487606s Jan 29 11:06:35.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:37.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.108527383s Jan 29 11:06:37.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:39.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.107112921s Jan 29 11:06:39.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:41.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.108314893s Jan 29 11:06:41.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:43.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.108547594s Jan 29 11:06:43.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:45.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.107084729s Jan 29 11:06:45.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:47.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.107917516s Jan 29 11:06:47.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:49.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.109888843s Jan 29 11:06:49.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:51.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m36.107845408s Jan 29 11:06:51.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:53.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.107880176s Jan 29 11:06:53.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:55.317: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.12172872s Jan 29 11:06:55.317: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:57.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.107227105s Jan 29 11:06:57.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:06:59.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.106921652s Jan 29 11:06:59.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:01.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.1074312s Jan 29 11:07:01.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:03.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.108676133s Jan 29 11:07:03.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:05.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.106834843s Jan 29 11:07:05.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:07.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.106787885s Jan 29 11:07:07.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:09.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.108603968s Jan 29 11:07:09.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:11.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m56.107879533s Jan 29 11:07:11.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:13.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.106736861s Jan 29 11:07:13.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:15.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.106264345s Jan 29 11:07:15.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:17.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.107509498s Jan 29 11:07:17.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:19.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.107325584s Jan 29 11:07:19.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:21.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.107651931s Jan 29 11:07:21.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:23.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.105909719s Jan 29 11:07:23.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:25.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.107615952s Jan 29 11:07:25.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:27.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.107811385s Jan 29 11:07:27.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:29.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.1079937s Jan 29 11:07:29.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:31.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m16.107444196s Jan 29 11:07:31.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:33.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.107890356s Jan 29 11:07:33.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:35.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.108256917s Jan 29 11:07:35.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:37.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.107509823s Jan 29 11:07:37.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:39.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.108626917s Jan 29 11:07:39.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:41.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.10671471s Jan 29 11:07:41.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:43.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.106062557s Jan 29 11:07:43.301: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:45.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.106913901s Jan 29 11:07:45.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:47.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.107151787s Jan 29 11:07:47.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:49.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.109648352s Jan 29 11:07:49.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:51.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m36.106884037s Jan 29 11:07:51.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:53.301: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.106096882s Jan 29 11:07:53.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:55.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.109375465s Jan 29 11:07:55.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:57.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.107143854s Jan 29 11:07:57.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:07:59.305: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.109968105s Jan 29 11:07:59.305: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:01.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.10768647s Jan 29 11:08:01.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:03.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.106322944s Jan 29 11:08:03.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:05.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.106661848s Jan 29 11:08:05.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:07.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.108718383s Jan 29 11:08:07.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:09.303: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.107855986s Jan 29 11:08:09.303: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:08:11.302: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m56.106723641s
Jan 29 11:08:11.302: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:08:13.304: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.109001576s
Jan 29 11:08:13.304: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 5m57.865s)
  test/e2e/cloud/gcp/reboot.go:136
  In [It] (Node Runtime: 5m0.001s)
    test/e2e/cloud/gcp/reboot.go:136
  Spec Goroutine
  goroutine 3632 [semacquire, 6 minutes]
    sync.runtime_Semacquire(0xc0006d5ba8?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fd091ede4a0?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fd091ede4a0?, 0xc003a3cbc0}, {0x8147108?, 0xc003a6d6c0}, {0xc0022ca1a0, 0x182}, 0xc00540fe00)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7fd091ede4a0, 0xc003a3cbc0})
      test/e2e/cloud/gcp/reboot.go:141
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc003a3cbc0})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 3647 [chan receive, 6 minutes]
    k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7fd091ede4a0?, 0xc003a3cbc0}, {0x8147108?, 0xc003a6d6c0}, {0x76d190b, 0xb}, {0xc003e92280, 0x4, 0x4}, 0x45d964b800, ...)
      test/e2e/framework/pod/resource.go:531
    k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...)
      test/e2e/framework/pod/resource.go:508
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fd091ede4a0, 0xc003a3cbc0}, {0x8147108, 0xc003a6d6c0}, {0x7ffc12df95ee, 0x3}, {0xc0010c9940, 0x1f}, {0xc0022ca1a0, 0x182})
      test/e2e/cloud/gcp/reboot.go:284
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 11:08:15.327: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.131843374s
Jan 29 11:08:15.327: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:08:15.375: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.179927635s
Jan 29 11:08:15.375: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:08:15.375: INFO: Pod kube-dns-autoscaler-5f6455f985-47h2m failed to be running and ready, or succeeded.
Jan 29 11:08:15.375: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false.
Pods: [kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m] Jan 29 11:08:15.375: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:02:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:02:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.19 PodIPs:[{IP:10.64.3.19}] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 20s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 11:01:26 +0000 UTC,FinishedAt:2023-01-29 11:02:55 +0000 UTC,ContainerID:containerd://7aa52ffd2a80100b3b8e372bac3ed9c5fa07e7b33722262869173a446eb64507,}} Ready:false RestartCount:4 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://7aa52ffd2a80100b3b8e372bac3ed9c5fa07e7b33722262869173a446eb64507 Started:0xc000d2153f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 29 11:08:15.421: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: an error on the server ("unknown") has prevented the request from succeeding (get pods volume-snapshot-controller-0): Jan 29 11:08:15.421: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: an error on the server ("unknown") has prevented the request from succeeding (get pods volume-snapshot-controller-0): Jan 29 11:08:15.421: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-47h2m: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:59:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:00:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] 
StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:1 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://47de7bf651c6c66b4beb7067f0cd8237151462cd30542dae17a4415076b6cc9c Started:0xc000d20a9a}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 11:08:15.466: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-dns-autoscaler-5f6455f985-47h2m): Jan 29 11:08:15.466: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-dns-autoscaler-5f6455f985-47h2m): Jan 29 11:08:15.466: INFO: Node bootstrap-e2e-minion-group-7sd9 failed reboot test. Jan 29 11:08:15.466: INFO: Executing termination hook on nodes Jan 29 11:08:15.466: INFO: Getting external IP address for bootstrap-e2e-minion-group-3n8r Jan 29 11:08:15.466: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-3n8r(34.145.60.3:22) Jan 29 11:08:15.994: INFO: ssh prow@34.145.60.3:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 29 11:08:15.994: INFO: ssh prow@34.145.60.3:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 11:03:25 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 11:08:15.994: INFO: ssh prow@34.145.60.3:22: stderr: "" Jan 29 11:08:15.994: INFO: ssh prow@34.145.60.3:22: exit code: 0 Jan 29 11:08:15.994: INFO: Getting external IP address for bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:15.994: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-7sd9(34.168.47.126:22) Jan 29 11:08:16.538: INFO: ssh prow@34.168.47.126:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 29 11:08:16.538: INFO: ssh prow@34.168.47.126:22: stdout: "" Jan 29 11:08:16.538: INFO: ssh prow@34.168.47.126:22: stderr: "cat: /tmp/drop-inbound.log: No such file or directory\n" Jan 29 11:08:16.538: INFO: ssh prow@34.168.47.126:22: exit code: 1 Jan 29 11:08:16.538: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-inbound.log: No such file or directory ) Jan 29 11:08:16.538: INFO: Getting external IP address for bootstrap-e2e-minion-group-90fc Jan 29 11:08:16.538: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-90fc(34.105.52.142:22) Jan 29 11:08:17.074: INFO: ssh prow@34.105.52.142:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 29 11:08:17.074: INFO: ssh prow@34.105.52.142:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 11:03:25 UTC 2023\n+ sleep 120\n+ true\n+ 
sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 11:08:17.074: INFO: ssh prow@34.105.52.142:22: stderr: "" Jan 29 11:08:17.075: INFO: ssh prow@34.105.52.142:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:08:17.075 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 11:08:17.075 (5m2.258s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:08:17.075 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 11:08:17.076 Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-85z9q to bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.639797337s (2.639812936s including waiting) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
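The stdout captured from bootstrap-e2e-minion-group-3n8r and -90fc above is the shell xtrace of the inbound-drop hook the test leaves running on each node: allow loopback, drop all other inbound traffic, wait 120 seconds, then delete both rules (node -7sd9 had no /tmp/drop-inbound.log at all, i.e. its hook never ran or never wrote the file). A rough reconstruction of how such a hook could be launched follows; the real command string is assembled in test/e2e/cloud/gcp/reboot.go and wraps each iptables call in a retry loop (the "+ true ... + break" lines), which is omitted here, and the plain ssh binary stands in for the framework's SSH helper:

  // Launch an inbound-drop hook on a node, reconstructed from the xtrace
  // above. The retry loops around each iptables call are omitted.
  package main

  import (
      "fmt"
      "os/exec"
  )

  // dropInbound mirrors the trace: keep loopback open, drop everything
  // else, wait 120s, undo both rules; xtrace goes to /tmp/drop-inbound.log.
  const dropInbound = `nohup sh -x -c '
  sleep 10
  sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT
  sudo iptables -I INPUT 2 -j DROP
  date
  sleep 120
  sudo iptables -D INPUT -j DROP
  sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT
  ' >/tmp/drop-inbound.log 2>&1 &`

  func main() {
      host := "prow@34.145.60.3" // external IP from the log
      out, err := exec.Command("ssh", host, dropInbound).CombinedOutput()
      fmt.Printf("out=%q err=%v\n", out, err)
  }

The 120-second window lines up with the timeline above: rules installed at 11:03:25, nodes NotReady and tainted unreachable around 11:04:09-11:04:25, Ready again shortly after 11:05:26.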
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: Get "http://10.64.3.17:8181/ready": dial tcp 10.64.3.17:8181: connect: connection refused Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-85z9q_kube-system(a8de34c0-3754-4f31-8c5e-d047238243e1) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-tbk49 to bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.071624726s (1.071641644s including waiting) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
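The goroutine dump in the progress report above gives the shape of the hang: the spec goroutine blocks in sync.(*WaitGroup).Wait at reboot.go:181 while the worker for node -7sd9 (goroutine 3647) sits in checkPodsCondition; the 0x45d964b800 argument is 300,000,000,000 ns, i.e. the 5m0s budget that expires at 11:08:15. A self-contained sketch of that fan-out-and-poll pattern (node names and podsRunningAndReady are stand-ins, not the framework's helpers):

  // Sketch of the fan-out the stack dump shows: one worker goroutine per
  // node, a shared result slice, and a WaitGroup the spec goroutine blocks
  // on. checkNode stands in for rebootNode + CheckPodsRunningReadyOrSucceeded.
  package main

  import (
      "context"
      "fmt"
      "sync"
      "time"
  )

  func podsRunningAndReady(node string) bool { return false } // placeholder

  func checkNode(ctx context.Context, node string) bool {
      deadline := time.After(5 * time.Minute) // the 0x45d964b800 ns budget
      tick := time.NewTicker(2 * time.Second) // the ~2s poll cadence in the log
      defer tick.Stop()
      for {
          select {
          case <-ctx.Done():
              return false
          case <-deadline:
              return false // "failed to be running and ready, or succeeded"
          case <-tick.C:
              if podsRunningAndReady(node) {
                  return true
              }
          }
      }
  }

  func main() {
      nodes := []string{"minion-a", "minion-b", "minion-c"} // stand-in names
      result := make([]bool, len(nodes))
      var wg sync.WaitGroup
      for i, node := range nodes {
          wg.Add(1)
          go func(i int, node string) {
              defer wg.Done()
              result[i] = checkNode(context.Background(), node)
          }(i, node)
      }
      wg.Wait() // the sync.(*WaitGroup).Wait frame in the dump
      for i, ok := range result {
          if !ok {
              fmt.Printf("Node %s failed reboot test.\n", nodes[i])
          }
      }
  }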
Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container coredns Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": dial tcp 10.64.2.4:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Container coredns failed liveness probe, will be restarted Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f-tbk49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-tbk49 Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-85z9q Jan 29 11:08:17.138: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-tbk49 Jan 29 11:08:17.138: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 11:08:17.138: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 11:08:17.138: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 11:08:17.138: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a845b became leader Jan 29 11:08:17.138: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a15ba became leader Jan 29 11:08:17.138: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_bd5ff became leader Jan 29 11:08:17.138: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_66f84 became leader Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b69l8 to bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.678277477s (1.67829785s including waiting) Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
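Every "event for ..." line in this section comes from the AfterEach step above, "Collecting events from namespace kube-system". A minimal client-go sketch of the same collection, with the kubeconfig path taken from the log header and the output format mirroring these lines:

  // List kube-system Events the way the AfterEach dump does. The kubeconfig
  // path is taken from this run's log; error handling is deliberately crude.
  package main

  import (
      "context"
      "fmt"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(cfg)
      if err != nil {
          panic(err)
      }
      events, err := cs.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{})
      if err != nil {
          panic(err)
      }
      for _, e := range events.Items {
          // Mirrors "event for <object>: {<component> <host>} <reason>: <message>".
          fmt.Printf("event for %s: {%s %s} %s: %s\n",
              e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
      }
  }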
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b69l8_kube-system(fae56098-57a4-4079-a8fc-75f48b84c442) Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-h9nwn to bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.129847ms (676.140226ms including waiting) Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-h9nwn_kube-system(0ac52dd7-f76d-4f28-9d8a-8af2e2676683) Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-kxtrk to bootstrap-e2e-minion-group-90fc Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 824.620705ms (824.644728ms including waiting) Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
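The konnectivity-agent liveness failures above, "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", mean the kubelet's probe GET never received response headers before its timeout; "HTTP probe failed with statuscode: 503" means headers did arrive but with a failing code. The check itself is just a short-timeout GET that treats status codes below 400 as healthy; a sketch, with the URL from the events above and a 1-second timeout (the kubelet default, assumed for this manifest):

  // Sketch of a kubelet-style HTTP liveness/readiness check: a GET with a
  // short timeout, healthy iff the status code is below 400.
  package main

  import (
      "fmt"
      "net/http"
      "time"
  )

  func probe(url string, timeout time.Duration) error {
      client := &http.Client{Timeout: timeout}
      resp, err := client.Get(url)
      if err != nil {
          // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
          return err
      }
      defer resp.Body.Close()
      if resp.StatusCode >= 400 {
          return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
      }
      return nil
  }

  func main() {
      fmt.Println(probe("http://10.64.2.6:8093/healthz", time.Second))
  }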
Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container konnectivity-agent Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": dial tcp 10.64.1.5:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 11:08:17.138: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b69l8 Jan 29 11:08:17.138: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-kxtrk Jan 29 11:08:17.138: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-h9nwn Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 11:08:17.138: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 11:08:17.138: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 11:08:17.138: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 11:08:17.138: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:08:17.138: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 11:08:17.138: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0298c03a-3832-4855-a2af-cf203f6d5229 became leader Jan 29 11:08:17.138: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_b64428ec-4368-4776-ac50-8d5ce5d3c3d7 became leader Jan 29 11:08:17.138: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_93420249-344c-40fd-8874-2327496da9f4 became leader Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-47h2m to bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.618413775s (1.618457503s including waiting) Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container autoscaler Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container autoscaler Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container autoscaler Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
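The FailedScheduling entries above show why the autoscaler pod sat Pending during the reboots: two nodes still carried the node.kubernetes.io/not-ready taint and the master is unschedulable, so no node qualified until a kubelet posted Ready again. A pod could opt in to such nodes with a matching toleration; a hedged sketch of what that would look like in Go types (illustrative only, not taken from the addon's actual spec):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Tolerate the not-ready taint so a pod can be scheduled onto a node the
	// node lifecycle controller has tainted NoSchedule during a disruption.
	tol := corev1.Toleration{
		Key:      "node.kubernetes.io/not-ready",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
}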
Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-47h2m Jan 29 11:08:17.138: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-3n8r_kube-system(b5176a347e88e1ff4660b164d3f16916) Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-3n8r_kube-system(b5176a347e88e1ff4660b164d3f16916) Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
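The repeated BackOff entries ("Back-off restarting failed container") are the kubelet's crash-loop backoff: after each failed restart the wait roughly doubles from a 10s base up to a 5m cap, and resets once a container has run cleanly for long enough. A toy illustration of that schedule (the exact constants are kubelet internals and can differ by version):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Approximate kubelet crash-loop backoff: 10s base, doubling per failed
	// restart, capped at 5m (constants assumed, not read from kubelet source).
	backoff, maxBackoff := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}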
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7sd9_kube-system(20e39278d9aad8613df3183ed37c4881) Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Stopping container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-90fc_kube-system(81cae927179b6a5281a90fdaa765ded2) Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container kube-proxy Jan 29 11:08:17.138: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_425c93d9-4e38-470f-b4ba-e1a7e536d147 became leader Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c84835d1-579f-4af3-bbe9-2d8899072690 became leader Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_00ed0cb9-b982-4f69-9378-8d53a0626551 became leader Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3c66f968-f07b-4c3a-8b08-d3d24ec883af became leader Jan 29 11:08:17.138: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_323c779c-76b3-4e92-ab66-cc172e33c203 became leader Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. 
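The five LeaderElection entries above record each time a restarted kube-scheduler instance re-acquired its lock after a master disruption; kube-controller-manager shows the same pattern earlier in the dump. Those components use client-go's leaderelection package; a minimal hedged sketch of the pattern (lock name, namespace, identity, and timings here are illustrative):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "demo-scheduler", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // lease validity without renewal
		RenewDeadline: 10 * time.Second, // leader must renew within this window
		RetryPeriod:   2 * time.Second,  // candidates retry at this interval
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("lost leadership") },
		},
	})
}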
Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fqgll to bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 660.702719ms (660.716002ms including waiting) Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container default-http-backend Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container default-http-backend Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container default-http-backend Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container default-http-backend Jan 29 11:08:17.138: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fqgll Jan 29 11:08:17.138: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 11:08:17.138: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
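Every "event for …" line in this dump is a core/v1 Event that the framework lists from the kube-system namespace and prints as source, reason, and message once the test fails. Roughly, with client-go (a sketch that mimics the log format above, not the framework's exact code):

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	events, err := client.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range events.Items {
		// e.g. `event for kube-proxy-...: {kubelet node-x} BackOff: Back-off restarting ...`
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}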
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-9whkb to bootstrap-e2e-master Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 778.60477ms (778.615516ms including waiting) Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.176461611s (2.176470734s including waiting) Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-9whkb: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-mwf7j to bootstrap-e2e-minion-group-90fc Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 848.523206ms (848.543058ms including waiting) Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.257052519s (2.25706204s including waiting)
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-mwf7j: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ppxd4 to bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 812.309842ms (812.385382ms including waiting) Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.092848104s (2.092909933s including waiting) Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-ppxd4: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-zzqvh to bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 882.001192ms (882.012724ms including waiting) Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.161701332s (2.161712043s including waiting) Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container metadata-proxy Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container prometheus-to-sd-exporter Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1-zzqvh: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-9whkb Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-mwf7j Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-zzqvh Jan 29 11:08:17.138: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ppxd4 Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. 
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-6vkcg to bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.5552357s (2.555248653s including waiting) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container metrics-server Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container metrics-server Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.443599616s (2.443627566s including waiting) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container metrics-server-nanny Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container metrics-server-nanny Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container metrics-server Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container metrics-server-nanny Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c-6vkcg: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-6vkcg Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-6vkcg Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-vfwlz to bootstrap-e2e-minion-group-90fc Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.439956154s (1.43999999s including waiting) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metrics-server Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metrics-server Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.063883654s (1.063902072s including waiting) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metrics-server-nanny Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metrics-server-nanny Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metrics-server Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metrics-server Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container metrics-server-nanny Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container metrics-server-nanny Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "https://10.64.1.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Stopping container metrics-server Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Stopping container metrics-server-nanny Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-vfwlz_kube-system(43862482-416e-4d81-a91d-a9986c67b520) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {kubelet bootstrap-e2e-minion-group-90fc} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-vfwlz_kube-system(43862482-416e-4d81-a91d-a9986c67b520) Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9-vfwlz: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-vfwlz Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-vfwlz
Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 11:08:17.138: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.131380985s (3.131396318s including waiting) Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa) Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
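The NodeNotReady entries threaded through these events come from the node lifecycle controller flipping each node's Ready condition to False or Unknown while kubelet heartbeats were blocked; the TaintManagerEviction line above shows a pending eviction being cancelled once its node recovered. The AfterEach below waits for Ready to be True again; a rough client-go sketch of that check (illustrative, not the framework's implementation):

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s Ready=%v\n", n.Name, nodeReady(n))
	}
}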
Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container volume-snapshot-controller Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa) Jan 29 11:08:17.138: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:08:17.138 (64ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:08:17.138 Jan 29 11:08:17.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:08:17.187 (48ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 11:08:17.187 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 11:08:17.187 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:08:17.187 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:08:17.189 STEP: Collecting events from namespace "reboot-9358". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 11:08:17.189 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 11:08:17.231 Jan 29 11:08:17.273: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 11:08:17.273: INFO: Jan 29 11:08:17.323: INFO: Logging node info for node bootstrap-e2e-master Jan 29 11:08:17.367: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 13fcdc99-d52b-4449-9d12-c22cc2165092 1478 0 2023-01-29 10:57:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 10:57:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 10:57:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 10:57:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 11:03:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-3-g1-4-up-clu-n/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 10:57:47 +0000 UTC,LastTransitionTime:2023-01-29 10:57:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 11:03:26 +0000 UTC,LastTransitionTime:2023-01-29 10:57:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 11:03:26 +0000 UTC,LastTransitionTime:2023-01-29 10:57:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 11:03:26 +0000 UTC,LastTransitionTime:2023-01-29 10:57:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 11:03:26 +0000 UTC,LastTransitionTime:2023-01-29 10:57:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.171.183,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-up-c1-3-g1-4-up-clu-n.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-up-c1-3-g1-4-up-clu-n.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6e19589febb4d719b2a61e5595f77136,SystemUUID:6e19589f-ebb4-d719-b2a6-1e5595f77136,BootID:29bc0c62-e047-4b19-8209-442f993828f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-4-gfbf145b31,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 11:08:17.367: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 11:08:17.416: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 11:08:17.492: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 10:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container l7-lb-controller ready: true, restart count 5 Jan 29 11:08:17.492: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 10:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container kube-apiserver ready: true, restart count 1 Jan 29 11:08:17.492: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 10:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 29 11:08:17.492: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 10:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container kube-addon-manager ready: true, restart count 1 Jan 29 11:08:17.492: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 10:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container konnectivity-server-container ready: true, restart count 3 Jan 29 11:08:17.492: INFO: metadata-proxy-v0.1-9whkb started at 2023-01-29 10:57:53 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:17.492: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 11:08:17.492: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 11:08:17.492: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 10:56:44 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container kube-scheduler ready: true, restart count 4 Jan 29 11:08:17.492: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 10:56:44 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container etcd-container ready: true, restart count 1 Jan 29 11:08:17.492: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 10:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.492: INFO: Container etcd-container ready: true, restart count 2 Jan 29 11:08:17.777: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 11:08:17.777: INFO: Logging node info for node bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.822: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-3n8r 2308ea2b-6f43-4767-9035-72a71358d4e8 1764 0 2023-01-29 10:57:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-3n8r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 10:57:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 11:04:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 11:05:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 11:05:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 11:05:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-3-g1-4-up-clu-n/us-west1-b/bootstrap-e2e-minion-group-3n8r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:33 +0000 UTC,LastTransitionTime:2023-01-29 11:00:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 10:57:47 +0000 UTC,LastTransitionTime:2023-01-29 10:57:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:25 +0000 UTC,LastTransitionTime:2023-01-29 11:05:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:25 +0000 UTC,LastTransitionTime:2023-01-29 11:05:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:25 +0000 UTC,LastTransitionTime:2023-01-29 11:05:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 11:05:25 +0000 UTC,LastTransitionTime:2023-01-29 11:05:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.60.3,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-3n8r.c.gce-up-c1-3-g1-4-up-clu-n.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-3n8r.c.gce-up-c1-3-g1-4-up-clu-n.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:74dbd23ceb30f7e1b9778ca9043a85b7,SystemUUID:74dbd23c-eb30-f7e1-b977-8ca9043a85b7,BootID:f5724939-2ebb-4c25-bed7-3c19855449d6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-4-gfbf145b31,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 11:08:17.822: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.880: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-3n8r Jan 29 11:08:17.988: INFO: coredns-6846b5b5f-tbk49 started at 2023-01-29 10:57:57 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.988: INFO: Container coredns ready: true, restart count 2 Jan 29 11:08:17.988: INFO: kube-proxy-bootstrap-e2e-minion-group-3n8r started at 2023-01-29 10:57:31 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.988: INFO: Container kube-proxy ready: false, restart count 5 Jan 29 11:08:17.988: INFO: metadata-proxy-v0.1-zzqvh started at 2023-01-29 10:57:32 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:17.988: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 11:08:17.988: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 11:08:17.988: INFO: konnectivity-agent-h9nwn started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:17.988: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 29 11:08:18.175: INFO: Latency metrics for node bootstrap-e2e-minion-group-3n8r Jan 29 11:08:18.175: INFO: Logging node info for node bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:18.220: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7sd9 71363d69-32a3-46a0-a1ba-c7e4cd4f021b 1778 0 2023-01-29 10:57:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7sd9 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 10:57:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 10:59:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 11:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 11:05:37 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 11:05:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-3-g1-4-up-clu-n/us-west1-b/bootstrap-e2e-minion-group-7sd9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 
UTC,LastTransitionTime:2023-01-29 11:00:35 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 10:57:47 +0000 UTC,LastTransitionTime:2023-01-29 10:57:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 11:05:37 +0000 UTC,LastTransitionTime:2023-01-29 11:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.47.126,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7sd9.c.gce-up-c1-3-g1-4-up-clu-n.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7sd9.c.gce-up-c1-3-g1-4-up-clu-n.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8b45e6d3d45c059d9dc5a25fed23489d,SystemUUID:8b45e6d3-d45c-059d-9dc5-a25fed23489d,BootID:59628d06-8aa4-40bc-8a1a-a94d9cd48de1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-4-gfbf145b31,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 11:08:18.223: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:18.270: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:18.342: INFO: metadata-proxy-v0.1-ppxd4 started at 2023-01-29 10:57:35 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:18.342: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 11:08:18.342: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 11:08:18.342: INFO: konnectivity-agent-b69l8 started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 11:08:18.342: INFO: kube-proxy-bootstrap-e2e-minion-group-7sd9 started at 2023-01-29 10:57:34 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 11:08:18.342: INFO: l7-default-backend-8549d69d99-fqgll started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container default-http-backend ready: true, restart count 1 Jan 29 11:08:18.342: INFO: volume-snapshot-controller-0 started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container volume-snapshot-controller ready: true, restart count 7 Jan 29 11:08:18.342: INFO: coredns-6846b5b5f-85z9q started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container coredns ready: false, restart count 2 Jan 29 11:08:18.342: INFO: kube-dns-autoscaler-5f6455f985-47h2m started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.342: INFO: Container autoscaler ready: false, restart count 1 Jan 29 11:08:18.526: INFO: Latency metrics for node bootstrap-e2e-minion-group-7sd9 Jan 29 11:08:18.526: INFO: Logging node info for node bootstrap-e2e-minion-group-90fc Jan 29 11:08:18.570: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-90fc 7e9be70e-bdfd-46c5-b708-36a329fba312 1765 0 2023-01-29 10:57:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-90fc kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 10:57:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 11:04:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 11:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 11:05:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 11:05:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-3-g1-4-up-clu-n/us-west1-b/bootstrap-e2e-minion-group-90fc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 11:05:34 +0000 UTC,LastTransitionTime:2023-01-29 11:00:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 10:57:47 +0000 UTC,LastTransitionTime:2023-01-29 10:57:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:26 +0000 UTC,LastTransitionTime:2023-01-29 11:05:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:26 +0000 UTC,LastTransitionTime:2023-01-29 11:05:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 11:05:26 +0000 UTC,LastTransitionTime:2023-01-29 11:05:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 11:05:26 +0000 UTC,LastTransitionTime:2023-01-29 11:05:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.105.52.142,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-90fc.c.gce-up-c1-3-g1-4-up-clu-n.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-90fc.c.gce-up-c1-3-g1-4-up-clu-n.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf512e8691a9af13f2b194f33a3d645,SystemUUID:7cf512e8-691a-9af1-3f2b-194f33a3d645,BootID:34f2816f-fe16-46e6-bcdf-b877a0c0c870,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-4-gfbf145b31,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 11:08:18.570: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-90fc Jan 29 11:08:18.616: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-90fc Jan 29 11:08:18.685: INFO: konnectivity-agent-kxtrk started at 2023-01-29 10:57:47 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.685: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 11:08:18.685: INFO: metrics-server-v0.5.2-867b8754b9-vfwlz started at 2023-01-29 10:58:11 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:18.685: INFO: Container metrics-server ready: false, restart count 4 Jan 29 11:08:18.685: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 29 11:08:18.685: INFO: kube-proxy-bootstrap-e2e-minion-group-90fc started at 2023-01-29 10:57:31 +0000 UTC (0+1 container statuses recorded) Jan 29 11:08:18.685: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 11:08:18.685: INFO: metadata-proxy-v0.1-mwf7j started at 2023-01-29 10:57:32 +0000 UTC (0+2 container statuses recorded) Jan 29 11:08:18.685: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 11:08:18.685: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 11:08:18.875: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-90fc END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:08:18.875 (1.686s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:08:18.875 (1.688s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:08:18.875 STEP: Destroying namespace "reboot-9358" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 11:08:18.875 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:08:18.92 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:08:18.921 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:08:18.922 (0s)
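The failure above ("at least one node failed to reboot in the time given", reboot.go:190) is the aggregate verdict of a per-node check: the suite pushes a detached disruption script to each node over SSH, waits for the node to leave the Ready condition, then waits for it to return and for its no-liveness-probe pods to come back. That shape is visible in the log itself: the nohup'd SSH commands, the "Waiting up to 2m0s for node ... condition Ready to be false" polls, and the kube-proxy container on bootstrap-e2e-minion-group-3n8r still not ready after 5 restarts when the timeout fired. The sketch below models that flow for the drop-all-inbound-packets variant; the exact iptables rules, the 120s drop window, the 5m recovery wait, and the helper names are illustrative assumptions, not the verbatim reboot.go source.

    // Hypothetical sketch of the reboot test's per-node flow; names,
    // iptables details, and the recovery timeout are assumptions.
    package main

    import (
    	"fmt"
    	"time"
    )

    // dropInboundScript builds the kind of detached script the suite sends
    // over SSH: return immediately (nohup + &), then drop all inbound
    // traffic except loopback for a while and restore it afterwards.
    func dropInboundScript(logPath string) string {
    	return "nohup sh -c 'sleep 10 && " +
    		"sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && " +
    		"sudo iptables -I INPUT 2 -j DROP && " +
    		"sleep 120 && " +
    		"sudo iptables -D INPUT -j DROP && " +
    		"sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT' >" + logPath + " 2>&1 &"
    }

    // rebootNode mirrors the waits visible in the log: up to 2m for Ready to
    // go false, then a recovery wait (5m here, an assumption) for Ready again.
    // One false return from any node produces the failure at reboot.go:190.
    func rebootNode(node string, waitReady func(node string, want bool, timeout time.Duration) bool) bool {
    	fmt.Printf("SSH %q on %s\n", dropInboundScript("/tmp/drop-inbound.log"), node)
    	if !waitReady(node, false, 2*time.Minute) {
    		return false // the disruption never took effect
    	}
    	return waitReady(node, true, 5*time.Minute) // node must recover in time
    }

    func main() {
    	stub := func(string, bool, time.Duration) bool { return true }
    	fmt.Println("rebooted:", rebootNode("bootstrap-e2e-minion-group-3n8r", stub, stub))
    }

In this run the recovery leg is the one that plausibly timed out: the node dumps above show all three minions Ready again by 11:05, but kube-proxy on bootstrap-e2e-minion-group-3n8r was still "ready: false, restart count 5" at 11:08:17, which would be enough for the per-node check to report failure.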
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 11:02:16.857 There were additional failures detected after the initial failure. These are visible in the timeline.
from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:01:46.736 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:01:46.737 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:01:46.737 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 11:01:46.737 Jan 29 11:01:46.737: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 11:01:46.738 Jan 29 11:01:46.777: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:01:48.817: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:01:50.817: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:01:52.818: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:01:54.817: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:01:56.819: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:01:58.817: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:00.817: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:02.818: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:04.818: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:06.817: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:08.818: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:10.818: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:12.817: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:14.819: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:16.817: INFO: Unexpected error while creating namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:16.857: INFO: Unexpected error while creating 
namespace: Post "https://34.82.171.183/api/v1/namespaces": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:02:16.857: INFO: Unexpected error: <*errors.errorString | 0xc0000d1cd0>: { s: "timed out waiting for the condition", } [FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 11:02:16.857 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:02:16.857 (30.121s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:02:16.857 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 11:02:16.857 Jan 29 11:02:16.897: INFO: Unexpected error: <*url.Error | 0xc004494060>: { Op: "Get", URL: "https://34.82.171.183/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc002e20050>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004efb7a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 171, 183], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00484e000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.82.171.183/api/v1/namespaces/kube-system/events": dial tcp 34.82.171.183:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 11:02:16.897 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:02:16.897 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:02:16.897 Jan 29 11:02:16.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:02:16.937 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:02:16.937 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:02:16.937 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:02:16.937 (0s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:02:16.937 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:02:16.937 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:02:16.937 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:02:16.937 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:02:16.937 (0s)
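This second failure is different in kind: the test body never ran. BeforeEach could not create the test namespace because the apiserver at 34.82.171.183:443 refused connections for the whole window, with the framework retrying roughly every 2s from 11:01:46 to 11:02:16 before giving up after ~30s. The "timed out waiting for the condition" string is the stock timeout error from k8s.io/apimachinery's wait package. Below is a hypothetical model of that retry loop; the 2s interval and 30s deadline are inferred from the log timestamps, and the failing POST is a stand-in for the framework's real namespace-creation call.

    // Hypothetical model of the BeforeEach retry loop; the 2s/30s values are
    // inferred from the log timestamps, not read from framework.go.
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	attempt := 0
    	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
    		attempt++
    		// While the apiserver is down every attempt fails the same way;
    		// returning (false, nil) keeps the poll going until the timeout.
    		fmt.Printf("attempt %d: dial tcp 34.82.171.183:443: connect: connection refused\n", attempt)
    		return false, nil
    	})
    	fmt.Println(err) // "timed out waiting for the condition", as in the log
    }

That both the namespace POST in BeforeEach and the kube-system event dump in AfterEach were refused by the same 34.82.171.183:443 endpoint points at the control plane being unreachable rather than at the reboot logic itself; the master was presumably still restarting after the preceding disruptive case.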
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:19:47.674 There were additional failures detected after the initial failure. These are visible in the timeline.
from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:16:16.991 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:16:16.991 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:16:16.991 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 11:16:16.992 Jan 29 11:16:16.992: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 11:16:16.994 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 11:16:17.12 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 11:16:17.201 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:16:17.283 (291ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:16:17.283 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:16:17.283 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 11:16:17.283 Jan 29 11:16:17.379: INFO: Getting bootstrap-e2e-minion-group-3n8r Jan 29 11:16:17.379: INFO: Getting bootstrap-e2e-minion-group-7sd9 Jan 29 11:16:17.425: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true Jan 29 11:16:17.426: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7sd9 condition Ready to be true Jan 29 11:16:17.428: INFO: Getting bootstrap-e2e-minion-group-90fc Jan 29 11:16:17.470: INFO: Node bootstrap-e2e-minion-group-7sd9 has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4] Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4] Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ppxd4" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.470: INFO: Node bootstrap-e2e-minion-group-3n8r has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in 
namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.471: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-47h2m" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.472: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-90fc condition Ready to be true Jan 29 11:16:17.551: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 80.157373ms Jan 29 11:16:17.551: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:17.554: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 83.839466ms Jan 29 11:16:17.554: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.554: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 83.646155ms Jan 29 11:16:17.554: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:17.555: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 84.087091ms Jan 29 11:16:17.555: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.555: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:16:17.555: INFO: Getting external IP address for bootstrap-e2e-minion-group-3n8r Jan 29 11:16:17.555: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-3n8r(34.145.60.3:22) Jan 29 11:16:17.555: INFO: Pod "metadata-proxy-v0.1-ppxd4": Phase="Running", Reason="", readiness=true. Elapsed: 84.305718ms Jan 29 11:16:17.555: INFO: Pod "metadata-proxy-v0.1-ppxd4" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.555: INFO: Node bootstrap-e2e-minion-group-90fc has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:16:17.555: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:16:17.555: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.555: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.555: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9": Phase="Running", Reason="", readiness=true. 
Elapsed: 84.950295ms Jan 29 11:16:17.555: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.600: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 45.485408ms Jan 29 11:16:17.600: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.600: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 45.522648ms Jan 29 11:16:17.600: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.600: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:16:17.600: INFO: Getting external IP address for bootstrap-e2e-minion-group-90fc Jan 29 11:16:17.600: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-90fc(34.105.52.142:22) Jan 29 11:16:18.109: INFO: ssh prow@34.145.60.3:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 11:16:18.109: INFO: ssh prow@34.145.60.3:22: stdout: "" Jan 29 11:16:18.109: INFO: ssh prow@34.145.60.3:22: stderr: "" Jan 29 11:16:18.109: INFO: ssh prow@34.145.60.3:22: exit code: 0 Jan 29 11:16:18.109: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be false Jan 29 11:16:18.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:18.154: INFO: ssh prow@34.105.52.142:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 11:16:18.154: INFO: ssh prow@34.105.52.142:22: stdout: "" Jan 29 11:16:18.154: INFO: ssh prow@34.105.52.142:22: stderr: "" Jan 29 11:16:18.154: INFO: ssh prow@34.105.52.142:22: exit code: 0 Jan 29 11:16:18.154: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be false Jan 29 11:16:18.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:19.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.124134595s Jan 29 11:16:19.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:19.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.125331943s Jan 29 11:16:19.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:20.196: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:20.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:21.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.122143955s Jan 29 11:16:21.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:21.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126070603s Jan 29 11:16:21.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:22.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:22.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:23.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.123288317s Jan 29 11:16:23.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:23.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125148622s Jan 29 11:16:23.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:24.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 11:16:24.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:25.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.127095653s Jan 29 11:16:25.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:25.600: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129706504s Jan 29 11:16:25.600: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:26.325: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:26.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:27.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.122663416s Jan 29 11:16:27.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:27.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126736081s Jan 29 11:16:27.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:28.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:28.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:29.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
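The paired Pod "volume-snapshot-controller-0" / "kube-dns-autoscaler-5f6455f985-47h2m" entries that repeat throughout this stretch are a single poll loop: roughly every 2s the test re-fetches each pod and re-evaluates the "running and ready, or succeeded" condition, logging the elapsed time against the 5m0s budget. A client-go sketch of that check (helper names here are illustrative, not the framework's own):

package rebootsketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// runningReadyOrSucceeded mirrors the condition in the log: a pod passes if it
// has Succeeded, or if it is Running with the Ready condition True.
func runningReadyOrSucceeded(p *v1.Pod) bool {
	if p.Status.Phase == v1.PodSucceeded {
		return true
	}
	if p.Status.Phase != v1.PodRunning {
		return false // e.g. "want pod ... to be 'Running' but was 'Pending'"
	}
	for _, c := range p.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false // "didn't have condition {Ready True}"
}

// waitForPod polls every 2s for up to 5m, like the loop in the log.
func waitForPod(cs kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		return runningReadyOrSucceeded(pod), nil
	})
}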
Elapsed: 12.122177142s Jan 29 11:16:29.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:29.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125768052s Jan 29 11:16:29.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:30.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:30.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:31.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.122472904s Jan 29 11:16:31.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:31.595: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 14.124707199s Jan 29 11:16:31.595: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:32.454: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:32.532: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:33.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.123158356s Jan 29 11:16:33.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:33.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 16.125944192s Jan 29 11:16:33.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:34.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:34.576: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:35.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.123389255s Jan 29 11:16:35.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:35.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 18.12478034s Jan 29 11:16:35.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:36.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:36.619: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:37.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.124342789s Jan 29 11:16:37.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:37.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 20.126384733s Jan 29 11:16:37.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:38.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:38.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:39.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.122918092s Jan 29 11:16:39.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:39.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 22.125752466s Jan 29 11:16:39.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:40.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:40.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:41.592: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
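Interleaved with the pod polling, the "Condition Ready of node ... is true instead of false" entries come from a second wait loop that reads the NodeReady condition off Node.Status.Conditions until it flips; here the test waits up to 2m0s for Ready to become false, which is its evidence that the reboot actually took effect. A sketch in the same spirit (same package and imports as the previous sketch; again illustrative):

// nodeReady returns the node's Ready condition, or Unknown if it is absent.
func nodeReady(n *v1.Node) v1.NodeCondition {
	for _, c := range n.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c
		}
	}
	return v1.NodeCondition{Type: v1.NodeReady, Status: v1.ConditionUnknown}
}

// waitForNodeReadyState polls until Ready==want (want=false right after the
// reboot command, want=true again once the node should be back).
func waitForNodeReadyState(cs kubernetes.Interface, name string, want bool, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		c := nodeReady(node)
		got := c.Status == v1.ConditionTrue // anything other than True counts as not ready
		if got != want {
			fmt.Printf("Condition Ready of node %s is %v instead of %v. Reason: %s, message: %s\n",
				name, got, want, c.Reason, c.Message)
		}
		return got == want, nil
	})
}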
Elapsed: 24.121927135s Jan 29 11:16:41.592: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:41.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 24.125779825s Jan 29 11:16:41.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:42.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:42.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:43.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.122603412s Jan 29 11:16:43.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:43.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 26.126011418s Jan 29 11:16:43.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:44.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:44.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:45.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.122416129s Jan 29 11:16:45.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:45.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 28.125679247s Jan 29 11:16:45.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:46.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:46.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:47.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.123529953s Jan 29 11:16:47.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:47.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 30.126452984s Jan 29 11:16:47.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:48.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:48.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:49.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.122828984s Jan 29 11:16:49.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:49.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 32.125351554s Jan 29 11:16:49.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:50.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:50.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:51.592: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.122024056s Jan 29 11:16:51.592: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:51.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 34.125324521s Jan 29 11:16:51.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:52.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:52.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:53.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.123649026s Jan 29 11:16:53.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:53.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 36.1249645s Jan 29 11:16:53.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:54.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:55.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:55.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.122503729s Jan 29 11:16:55.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:55.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 38.125887647s Jan 29 11:16:55.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:56.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:57.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:57.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
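The entries for bootstrap-e2e-minion-group-3n8r and bootstrap-e2e-minion-group-90fc interleave at overlapping timestamps because each node is driven by its own worker, and the test as a whole passes only if every worker succeeds within its window; that is what the top-level "at least one node failed to reboot in the time given" failure aggregates. A sketch of that fan-out (illustrative only; the framework's actual orchestration differs in detail):

// rebootAll runs one check per node concurrently and reports whether all
// nodes rebooted and recovered in time.
func rebootAll(nodes []string, check func(node string) bool) bool {
	results := make(chan bool, len(nodes))
	for _, n := range nodes {
		go func(node string) { results <- check(node) }(n)
	}
	ok := true
	for range nodes {
		if !<-results {
			ok = false // at least one node failed to reboot in the time given
		}
	}
	return ok
}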
Elapsed: 40.124194381s Jan 29 11:16:57.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:57.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 40.125559943s Jan 29 11:16:57.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:59.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:59.113: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:59.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.122276967s Jan 29 11:16:59.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:59.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 42.125700699s Jan 29 11:16:59.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:01.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:01.252: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:01.686: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 44.215217492s Jan 29 11:17:01.686: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:01.686: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.215654169s Jan 29 11:17:01.686: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:03.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:03.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:03.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.123872534s Jan 29 11:17:03.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:03.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 46.125012821s Jan 29 11:17:03.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:05.278: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:05.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:05.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.122853385s Jan 29 11:17:05.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:05.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 48.125630894s Jan 29 11:17:05.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:07.322: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true Jan 29 11:17:07.365: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:07.383: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be true Jan 29 11:17:07.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:07.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.123645412s Jan 29 11:17:07.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:07.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 50.125654238s Jan 29 11:17:07.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:09.408: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:09.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:09.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.124217287s Jan 29 11:17:09.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:09.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 52.125363977s Jan 29 11:17:09.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:11.452: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:11.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:11.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.122458756s Jan 29 11:17:11.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:11.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 54.124973504s Jan 29 11:17:11.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:13.496: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:13.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:13.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
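From 11:17:07 the picture changes: both kubelets have stopped heartbeating, so the 2m0s wait for Ready=false succeeds and the test flips to waiting up to 5m0s for Ready to be true again. The node controller first marks Ready with reason NodeStatusUnknown ("Kubelet stopped posting node status", which the wait helper counts as false), then adds node.kubernetes.io/unreachable taints with NoSchedule and NoExecute effects; that is what the "is false, but Node is tainted by NodeController ... Failure" entries report, since a tainted node is not considered recovered even once Ready returns. A sketch of the corresponding taint check (illustrative, same package as the sketches above):

// hasUnreachableTaint reports whether the node controller has tainted the
// node as unreachable, as in the "tainted by NodeController" log entries.
func hasUnreachableTaint(n *v1.Node) bool {
	for _, t := range n.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" &&
			(t.Effect == v1.TaintEffectNoSchedule || t.Effect == v1.TaintEffectNoExecute) {
			return true
		}
	}
	return false
}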
Elapsed: 56.124163103s Jan 29 11:17:13.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:13.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 56.125397897s Jan 29 11:17:13.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:15.541: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:15.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.123392394s Jan 29 11:17:15.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:15.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 58.126026508s Jan 29 11:17:15.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:15.602: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:17.585: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:17.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.122252948s Jan 29 11:17:17.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:17.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.126506108s Jan 29 11:17:17.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:17.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:19.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.125028685s Jan 29 11:17:19.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:19.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.126239198s Jan 29 11:17:19.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:19.628: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:19.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:21.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.1252869s Jan 29 11:17:21.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:21.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.126080242s Jan 29 11:17:21.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:21.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:21.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:23.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.122909592s Jan 29 11:17:23.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:23.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.125694723s Jan 29 11:17:23.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:23.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:23.779: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:17:25.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.122994405s
Jan 29 11:17:25.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:17:25.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.125608788s
Jan 29 11:17:25.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:17:25.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure
Jan 29 11:17:25.823: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure
[The same five-entry poll cycle repeats every ~2 s from 11:17:27 through 11:18:43 (elapsed 1m10s to 2m26s): Pod "volume-snapshot-controller-0" stays Phase="Running" with readiness=false (ContainersNotReady: volume-snapshot-controller), Pod "kube-dns-autoscaler-5f6455f985-47h2m" stays Phase="Pending", and nodes bootstrap-e2e-minion-group-3n8r and bootstrap-e2e-minion-group-90fc stay NotReady with the same node.kubernetes.io/unreachable NoSchedule/NoExecute taints.]
Elapsed: 2m28.125362516s Jan 29 11:18:45.596: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 11:18:45.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.12630483s Jan 29 11:18:45.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:18:47.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:18:47.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:18:47.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.126428173s Jan 29 11:18:47.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:18:49.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:18:49.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.125902055s Jan 29 11:18:49.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:18:49.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:18:51.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.125358028s Jan 29 11:18:51.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:18:51.624: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:18:51.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:18:53.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m36.126173362s Jan 29 11:18:53.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:18:53.668: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:18:53.723: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:18:55.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.125851026s Jan 29 11:18:55.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:18:55.712: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:18:55.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:18:57.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.127172821s Jan 29 11:18:57.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:18:57.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:18:57.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:18:59.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.125590593s Jan 29 11:18:59.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:18:59.800: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. 
Failure Jan 29 11:18:59.855: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:01.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.125787766s Jan 29 11:19:01.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:01.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:01.899: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:03.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.12546392s Jan 29 11:19:03.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:03.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:03.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:05.600: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.129571754s Jan 29 11:19:05.600: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:05.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:05.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:07.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m50.12723735s Jan 29 11:19:07.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:08.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:08.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:09.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.125322769s Jan 29 11:19:09.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:10.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:10.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:11.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.127277107s Jan 29 11:19:11.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:12.106: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:12.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:13.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.126023078s Jan 29 11:19:13.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:14.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. 
Failure Jan 29 11:19:14.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:15.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.126540262s Jan 29 11:19:15.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:16.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:16.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:17.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.127538382s Jan 29 11:19:17.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:18.331: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:19:18.331: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:19:18.331: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:19:18.360: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:19:18.360: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:19:18.360: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:19:18.376: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 44.900956ms Jan 29 11:19:18.376: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 44.711169ms Jan 29 11:19:18.376: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded" Jan 29 11:19:18.376: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded" Jan 29 11:19:18.376: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:19:18.376: INFO: Reboot successful on node bootstrap-e2e-minion-group-3n8r Jan 29 11:19:18.413: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=false. 
Elapsed: 53.818194ms Jan 29 11:19:18.414: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-mwf7j' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:17:06 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:19:15 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }] Jan 29 11:19:18.414: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 54.107622ms Jan 29 11:19:18.414: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded" Jan 29 11:19:19.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.125653641s Jan 29 11:19:19.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:20.457: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 2.097156832s Jan 29 11:19:20.457: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded" Jan 29 11:19:20.457: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:19:20.457: INFO: Reboot successful on node bootstrap-e2e-minion-group-90fc Jan 29 11:19:21.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.126120591s Jan 29 11:19:21.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:23.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.126308642s Jan 29 11:19:23.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:25.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.125684671s Jan 29 11:19:25.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:27.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.126615457s Jan 29 11:19:27.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:29.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m12.126943395s Jan 29 11:19:29.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:31.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.125762191s Jan 29 11:19:31.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:33.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.126020008s Jan 29 11:19:33.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:35.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.125573262s Jan 29 11:19:35.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:37.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.12678381s Jan 29 11:19:37.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:39.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.125868254s Jan 29 11:19:39.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:41.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.126167155s Jan 29 11:19:41.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:43.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.126306933s Jan 29 11:19:43.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:45.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.126135045s Jan 29 11:19:45.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:47.595: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-47h2m: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-47h2m": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:19:47.595: INFO: Pod kube-dns-autoscaler-5f6455f985-47h2m failed to be running and ready, or succeeded. Jan 29 11:19:47.595: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4] Jan 29 11:19:47.595: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:13:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:13:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.29 PodIPs:[{IP:10.64.3.29}] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 5m0s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 11:12:11 +0000 UTC,FinishedAt:2023-01-29 11:13:32 +0000 UTC,ContainerID:containerd://5bc5111d6ad911bb24622bf14e87706433073a168269424f373079a4756826ed,}} Ready:false RestartCount:8 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://5bc5111d6ad911bb24622bf14e87706433073a168269424f373079a4756826ed Started:0xc004e8294f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 29 11:19:47.634: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.82.171.183:443: connect: connection refused: Jan 29 11:19:47.634: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.82.171.183:443: connect: connection refused: Jan 29 11:19:47.634: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-47h2m: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:59:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:00:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:1 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://47de7bf651c6c66b4beb7067f0cd8237151462cd30542dae17a4415076b6cc9c Started:0xc00439bf1a}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 11:19:47.674: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-47h2m/log?container=autoscaler&previous=false": dial tcp 34.82.171.183:443: connect: connection refused: Jan 29 11:19:47.674: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-47h2m/log?container=autoscaler&previous=false": dial tcp 34.82.171.183:443: connect: connection refused: Jan 29 11:19:47.674: INFO: Node bootstrap-e2e-minion-group-7sd9 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:19:47.674 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 11:19:47.674 (3m30.391s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:19:47.674 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 11:19:47.674 Jan 29 11:19:47.714: INFO: Unexpected error: <*url.Error | 0xc004e4d3b0>: { Op: "Get", URL: "https://34.82.171.183/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc004c89e50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003e775f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 171, 183], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0006d7240>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.82.171.183/api/v1/namespaces/kube-system/events": dial tcp 34.82.171.183:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 11:19:47.714 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:19:47.714 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:19:47.714 Jan 29 11:19:47.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:19:47.754 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 11:19:47.754 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 11:19:47.754 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:19:47.754 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:19:47.754 STEP: Collecting events from namespace "reboot-1392". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 11:19:47.754 Jan 29 11:19:47.794: INFO: Unexpected error: failed to list events in namespace "reboot-1392": <*url.Error | 0xc003e77650>: { Op: "Get", URL: "https://34.82.171.183/api/v1/namespaces/reboot-1392/events", Err: <*net.OpError | 0xc003e792c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0042da510>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 171, 183], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003f49360>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:19:47.794 (40ms) [FAILED] failed to list events in namespace "reboot-1392": Get "https://34.82.171.183/api/v1/namespaces/reboot-1392/events": dial tcp 34.82.171.183:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/29/23 11:19:47.794 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:19:47.794 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:19:47.794 STEP: Destroying namespace "reboot-1392" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 11:19:47.794 [FAILED] Couldn't delete ns: "reboot-1392": Delete "https://34.82.171.183/api/v1/namespaces/reboot-1392": dial tcp 34.82.171.183:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.171.183/api/v1/namespaces/reboot-1392", Err:(*net.OpError)(0xc004a7c410)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/29/23 11:19:47.834 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:19:47.834 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:19:47.834 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:19:47.834 (0s)
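For readers tracing the repeated "Error evaluating pod condition running and ready, or succeeded" entries above: the following is a minimal sketch of that predicate, assuming client-go's core/v1 types. It is an illustrative reconstruction, not the exact helper from test/e2e/framework.

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
)

// podRunningReadyOrSucceeded mirrors the condition named in the log: a pod
// passes if its phase is Succeeded, or if it is Running and its Ready
// condition is True. In the log above, kube-dns-autoscaler-5f6455f985-47h2m
// fails because its phase stays Pending, while volume-snapshot-controller-0
// is Running but Ready=False (CrashLoopBackOff) until 11:18:45.
func podRunningReadyOrSucceeded(pod *v1.Pod) bool {
	if pod.Status.Phase == v1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}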
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:19:47.674 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
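The timeline below records the mechanics of the clean-reboot case: the framework SSHes a detached nohup sh -c 'sleep 10 && sudo reboot' to each node, waits up to 2m0s for the node's Ready condition to turn false, then waits for it to turn true again (the waiter also treats lingering NodeController taints such as node.kubernetes.io/unreachable as not-yet-recovered, which is why "Condition Ready ... is true, but Node is tainted" still logs Failure). Below is a hedged sketch of that flow, assuming client-go and an ssh binary on PATH; the function and parameter names here are illustrative, not the test's actual identifiers.

package e2esketch

import (
	"context"
	"fmt"
	"os/exec"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// rebootAndWait issues the same detached reboot command the log shows (the
// sleep lets the SSH session close before the reboot kills it), then polls
// the node's Ready condition: first to False (node went down), then back to
// True (node recovered).
func rebootAndWait(cs kubernetes.Interface, nodeName, sshAddr string) error {
	cmd := exec.Command("ssh", sshAddr,
		"nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &")
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("ssh reboot %s: %w", nodeName, err)
	}
	for _, want := range []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue} {
		err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API-server outages, as the log does
			}
			for _, c := range node.Status.Conditions {
				if c.Type == v1.NodeReady {
					return c.Status == want, nil
				}
			}
			return false, nil
		})
		if err != nil {
			return fmt.Errorf("node %s never reached Ready=%s: %w", nodeName, want, err)
		}
	}
	return nil
}

In this particular run the nodes themselves do come back ("Reboot successful" on bootstrap-e2e-minion-group-3n8r and -90fc); it is the subsequent pod re-readiness wait on bootstrap-e2e-minion-group-7sd9 that exhausts its 5m0s budget and fails the test.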
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:16:16.991 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:16:16.991 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:16:16.991 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 11:16:16.992 Jan 29 11:16:16.992: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 11:16:16.994 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 11:16:17.12 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 11:16:17.201 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:16:17.283 (291ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:16:17.283 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:16:17.283 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 11:16:17.283 Jan 29 11:16:17.379: INFO: Getting bootstrap-e2e-minion-group-3n8r Jan 29 11:16:17.379: INFO: Getting bootstrap-e2e-minion-group-7sd9 Jan 29 11:16:17.425: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true Jan 29 11:16:17.426: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7sd9 condition Ready to be true Jan 29 11:16:17.428: INFO: Getting bootstrap-e2e-minion-group-90fc Jan 29 11:16:17.470: INFO: Node bootstrap-e2e-minion-group-7sd9 has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4] Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4] Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ppxd4" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.470: INFO: Node bootstrap-e2e-minion-group-3n8r has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.470: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in 
namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.471: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-47h2m" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.472: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-90fc condition Ready to be true Jan 29 11:16:17.551: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 80.157373ms Jan 29 11:16:17.551: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:17.554: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 83.839466ms Jan 29 11:16:17.554: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.554: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 83.646155ms Jan 29 11:16:17.554: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:17.555: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 84.087091ms Jan 29 11:16:17.555: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.555: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:16:17.555: INFO: Getting external IP address for bootstrap-e2e-minion-group-3n8r Jan 29 11:16:17.555: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-3n8r(34.145.60.3:22) Jan 29 11:16:17.555: INFO: Pod "metadata-proxy-v0.1-ppxd4": Phase="Running", Reason="", readiness=true. Elapsed: 84.305718ms Jan 29 11:16:17.555: INFO: Pod "metadata-proxy-v0.1-ppxd4" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.555: INFO: Node bootstrap-e2e-minion-group-90fc has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:16:17.555: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:16:17.555: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.555: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:16:17.555: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9": Phase="Running", Reason="", readiness=true. 
Elapsed: 84.950295ms Jan 29 11:16:17.555: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.600: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 45.485408ms Jan 29 11:16:17.600: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.600: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 45.522648ms Jan 29 11:16:17.600: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded" Jan 29 11:16:17.600: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:16:17.600: INFO: Getting external IP address for bootstrap-e2e-minion-group-90fc Jan 29 11:16:17.600: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-90fc(34.105.52.142:22) Jan 29 11:16:18.109: INFO: ssh prow@34.145.60.3:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 11:16:18.109: INFO: ssh prow@34.145.60.3:22: stdout: "" Jan 29 11:16:18.109: INFO: ssh prow@34.145.60.3:22: stderr: "" Jan 29 11:16:18.109: INFO: ssh prow@34.145.60.3:22: exit code: 0 Jan 29 11:16:18.109: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be false Jan 29 11:16:18.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:18.154: INFO: ssh prow@34.105.52.142:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 11:16:18.154: INFO: ssh prow@34.105.52.142:22: stdout: "" Jan 29 11:16:18.154: INFO: ssh prow@34.105.52.142:22: stderr: "" Jan 29 11:16:18.154: INFO: ssh prow@34.105.52.142:22: exit code: 0 Jan 29 11:16:18.154: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be false Jan 29 11:16:18.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:19.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.124134595s Jan 29 11:16:19.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:19.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.125331943s Jan 29 11:16:19.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:20.196: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:20.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:21.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.122143955s Jan 29 11:16:21.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:21.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126070603s Jan 29 11:16:21.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:22.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:22.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:23.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.123288317s Jan 29 11:16:23.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:23.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125148622s Jan 29 11:16:23.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:24.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 11:16:24.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:25.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.127095653s Jan 29 11:16:25.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:25.600: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129706504s Jan 29 11:16:25.600: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:26.325: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:26.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:27.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.122663416s Jan 29 11:16:27.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:27.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126736081s Jan 29 11:16:27.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:28.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:28.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:29.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.122177142s Jan 29 11:16:29.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:29.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125768052s Jan 29 11:16:29.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:30.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:30.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:31.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.122472904s Jan 29 11:16:31.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:31.595: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 14.124707199s Jan 29 11:16:31.595: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:32.454: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:32.532: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:33.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.123158356s Jan 29 11:16:33.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:33.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 16.125944192s Jan 29 11:16:33.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:34.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:34.576: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:35.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.123389255s Jan 29 11:16:35.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:35.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 18.12478034s Jan 29 11:16:35.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:36.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:36.619: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:37.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.124342789s Jan 29 11:16:37.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:37.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 20.126384733s Jan 29 11:16:37.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:38.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:38.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:39.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.122918092s Jan 29 11:16:39.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:39.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 22.125752466s Jan 29 11:16:39.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:40.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:40.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:41.592: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.121927135s Jan 29 11:16:41.592: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:41.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 24.125779825s Jan 29 11:16:41.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:42.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:42.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:43.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.122603412s Jan 29 11:16:43.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:43.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 26.126011418s Jan 29 11:16:43.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:44.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:44.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:45.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.122416129s Jan 29 11:16:45.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:45.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 28.125679247s Jan 29 11:16:45.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:46.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:46.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:47.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.123529953s Jan 29 11:16:47.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:47.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 30.126452984s Jan 29 11:16:47.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:48.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:48.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:49.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.122828984s Jan 29 11:16:49.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:49.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 32.125351554s Jan 29 11:16:49.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:50.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:50.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:51.592: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.122024056s Jan 29 11:16:51.592: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:51.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 34.125324521s Jan 29 11:16:51.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:52.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:52.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:53.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.123649026s Jan 29 11:16:53.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:53.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 36.1249645s Jan 29 11:16:53.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:54.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:55.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:55.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.122503729s Jan 29 11:16:55.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:55.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 38.125887647s Jan 29 11:16:55.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:56.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:57.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:57.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.124194381s Jan 29 11:16:57.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:57.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 40.125559943s Jan 29 11:16:57.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:16:59.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:59.113: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:16:59.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.122276967s Jan 29 11:16:59.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:16:59.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 42.125700699s Jan 29 11:16:59.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:01.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:01.252: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:01.686: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 44.215217492s Jan 29 11:17:01.686: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:01.686: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.215654169s Jan 29 11:17:01.686: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:03.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:03.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:03.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.123872534s Jan 29 11:17:03.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:03.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 46.125012821s Jan 29 11:17:03.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:05.278: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:05.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:17:05.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.122853385s Jan 29 11:17:05.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:05.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 48.125630894s Jan 29 11:17:05.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:07.322: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true Jan 29 11:17:07.365: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:07.383: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be true Jan 29 11:17:07.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:07.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.123645412s Jan 29 11:17:07.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:07.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 50.125654238s Jan 29 11:17:07.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:09.408: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:09.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:09.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
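At 11:17:07 the picture inverts: both rebooted nodes stop posting status, their Ready condition flips to false with reason NodeStatusUnknown, and the test begins the recovery wait ("Waiting up to 5m0s for node ... condition Ready to be true"). A hedged sketch of the node-side check being logged, again assuming k8s.io/api/core/v1 types and an illustrative helper name:

package sketch

import (
	v1 "k8s.io/api/core/v1"
)

// nodeReadyIs reports whether the node's Ready condition currently has
// the wanted status, mirroring the "Condition Ready of node ... is
// false instead of true" messages. Reason NodeStatusUnknown on that
// condition means the node controller has not heard from the kubelet
// recently; here, because inbound packets to the node are being dropped.
func nodeReadyIs(node *v1.Node, want v1.ConditionStatus) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == want
		}
	}
	return false
}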
Elapsed: 52.124217287s Jan 29 11:17:09.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:09.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 52.125363977s Jan 29 11:17:09.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:11.452: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:11.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:11.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.122458756s Jan 29 11:17:11.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:11.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 54.124973504s Jan 29 11:17:11.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:13.496: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:13.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:13.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.124163103s Jan 29 11:17:13.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:13.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 56.125397897s Jan 29 11:17:13.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:15.541: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:15.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.123392394s Jan 29 11:17:15.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:15.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 58.126026508s Jan 29 11:17:15.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:15.602: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:17.585: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:17.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
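The "tainted by NodeController" messages show the unreachable taint being applied in two steps once the nodes stop reporting: node.kubernetes.io/unreachable with effect NoSchedule at 11:17:06, then NoExecute a few seconds later (11:17:11 on -3n8r, 11:17:21 on -90fc). A small sketch of detecting such a taint on the Node object (illustrative helper name, k8s.io/api/core/v1 assumed):

package sketch

import (
	v1 "k8s.io/api/core/v1"
)

// hasUnreachableTaint reports whether the node carries the
// node.kubernetes.io/unreachable taint with the given effect, as the
// node controller applies in the log: NoSchedule first, NoExecute later.
func hasUnreachableTaint(node *v1.Node, effect v1.TaintEffect) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" && t.Effect == effect {
			return true
		}
	}
	return false
}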
Elapsed: 1m0.122252948s Jan 29 11:17:17.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:17.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.126506108s Jan 29 11:17:17.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:17.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:19.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.125028685s Jan 29 11:17:19.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:19.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.126239198s Jan 29 11:17:19.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:19.628: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:19.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:21.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
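By this point the pod wait has crossed the one-minute mark of its budget. The ~2s cadence of the entries and the "Waiting up to 5m0s" messages suggest a poll loop of roughly the following shape; this is a sketch under assumptions (client-go's wait.PollImmediate, plus the podRunningReadyOrSucceeded predicate from the earlier sketch), not the framework's literal code:

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls a pod roughly every 2s against a 5m budget,
// treating transient API errors as "not done yet" rather than fatal,
// which matches the steady 2s rhythm of the log entries above.
func waitForPodReady(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient error: keep polling
		}
		return podRunningReadyOrSucceeded(pod), nil
	})
}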
Elapsed: 1m4.1252869s Jan 29 11:17:21.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:21.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.126080242s Jan 29 11:17:21.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:21.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:21.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:17:23.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.122909592s Jan 29 11:17:23.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:23.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.125694723s Jan 29 11:17:23.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:23.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:23.779: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:17:25.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.122994405s Jan 29 11:17:25.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:25.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.125608788s Jan 29 11:17:25.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:25.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:25.823: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:17:27.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.125075385s Jan 29 11:17:27.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:27.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.126296684s Jan 29 11:17:27.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:27.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:27.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:17:29.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.122510454s Jan 29 11:17:29.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:29.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.125542395s Jan 29 11:17:29.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:29.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:29.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:17:31.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.122495986s Jan 29 11:17:31.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:31.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.125728098s Jan 29 11:17:31.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:31.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:31.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:17:33.593: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.122332155s Jan 29 11:17:33.593: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:33.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.125707919s Jan 29 11:17:33.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:33.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:33.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:17:35.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.12390369s Jan 29 11:17:35.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:17:35.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.12548295s Jan 29 11:17:35.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:17:35.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:17:36.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:17:37.594: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.123845514s
Jan 29 11:17:37.594: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:13:32 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:17:37.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.126085531s
Jan 29 11:17:37.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:17:38.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure
Jan 29 11:17:38.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure
[this poll iteration repeats with identical results every ~2s from 11:17:39 through 11:18:45 (elapsed 1m22s-2m26s): volume-snapshot-controller-0 stays Running but not Ready, kube-dns-autoscaler-5f6455f985-47h2m stays Pending, and both nodes remain NotReady with the unreachable taints above]
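For reference, the "running and ready, or succeeded" condition being re-evaluated above amounts to the following check. This is a minimal sketch written against the k8s.io/api types only; the helper name runningReadyOrSucceeded is hypothetical, not the framework's actual function:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // runningReadyOrSucceeded passes a pod that has Succeeded, or that is
    // Running with the PodReady condition True. volume-snapshot-controller-0
    // fails this check above because its phase is already Running while
    // Ready has been False since 11:13:32 (ContainersNotReady).
    func runningReadyOrSucceeded(pod *corev1.Pod) bool {
    	if pod.Status.Phase == corev1.PodSucceeded {
    		return true
    	}
    	if pod.Status.Phase != corev1.PodRunning {
    		return false
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// A pod in the same state the log reports: Running, Ready=False.
    	pod := &corev1.Pod{Status: corev1.PodStatus{
    		Phase: corev1.PodRunning,
    		Conditions: []corev1.PodCondition{
    			{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
    		},
    	}}
    	fmt.Println(runningReadyOrSucceeded(pod)) // false, as logged above
    }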
Jan 29 11:18:45.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 2m28.125362516s
Jan 29 11:18:45.596: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 11:18:45.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.12630483s
Jan 29 11:18:45.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
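The "Elapsed" counters above come from a fixed-cadence poll against the apiserver. A minimal sketch of the same 2-second poll with a 5-minute budget, assuming a client-go clientset built from the kubeconfig path the suite logs at startup; this illustrates the pattern, not the framework's actual implementation:

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: the same kubeconfig the suite reports using.
    	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(config)

    	// Poll every 2s for up to 5m, matching the cadence of the entries above.
    	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		pod, err := clientset.CoreV1().Pods("kube-system").
    			Get(context.TODO(), "kube-dns-autoscaler-5f6455f985-47h2m", metav1.GetOptions{})
    		if err != nil {
    			// Tolerate transient apiserver errors and keep polling.
    			return false, nil
    		}
    		return pod.Status.Phase == corev1.PodRunning, nil
    	})
    	if err != nil {
    		// wait.ErrWaitTimeout once the budget is spent while still Pending.
    		panic(err)
    	}
    }

Against a cluster in the state logged here, the poll would exhaust its budget while the pod is still Pending, which is exactly the repeated "want ... to be 'Running' but was 'Pending'" failure above.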
[the kube-dns-autoscaler poll and the NotReady/unreachable-taint messages for bootstrap-e2e-minion-group-3n8r and bootstrap-e2e-minion-group-90fc repeat every ~2s from 11:18:47 through 11:19:05 (elapsed 2m30s-2m48s), with no change in status]
Elapsed: 2m50.12723735s Jan 29 11:19:07.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:08.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:08.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:09.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.125322769s Jan 29 11:19:09.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:10.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:10.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:11.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.127277107s Jan 29 11:19:11.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:12.106: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:12.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:13.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.126023078s Jan 29 11:19:13.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:14.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. 
Failure Jan 29 11:19:14.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:17:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:15.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.126540262s Jan 29 11:19:15.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:16.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:11 +0000 UTC}]. Failure Jan 29 11:19:16.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:17:21 +0000 UTC}]. Failure Jan 29 11:19:17.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.127538382s Jan 29 11:19:17.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:18.331: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:19:18.331: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:19:18.331: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:19:18.360: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:19:18.360: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:19:18.360: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:19:18.376: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 44.900956ms Jan 29 11:19:18.376: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 44.711169ms Jan 29 11:19:18.376: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded" Jan 29 11:19:18.376: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded" Jan 29 11:19:18.376: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:19:18.376: INFO: Reboot successful on node bootstrap-e2e-minion-group-3n8r Jan 29 11:19:18.413: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=false. 
Elapsed: 53.818194ms Jan 29 11:19:18.414: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-mwf7j' on 'bootstrap-e2e-minion-group-90fc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:17:06 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:19:15 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }] Jan 29 11:19:18.414: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 54.107622ms Jan 29 11:19:18.414: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded" Jan 29 11:19:19.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.125653641s Jan 29 11:19:19.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:20.457: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 2.097156832s Jan 29 11:19:20.457: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded" Jan 29 11:19:20.457: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:19:20.457: INFO: Reboot successful on node bootstrap-e2e-minion-group-90fc Jan 29 11:19:21.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.126120591s Jan 29 11:19:21.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:23.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.126308642s Jan 29 11:19:23.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:25.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.125684671s Jan 29 11:19:25.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:27.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.126615457s Jan 29 11:19:27.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:29.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m12.126943395s Jan 29 11:19:29.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:31.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.125762191s Jan 29 11:19:31.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:33.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.126020008s Jan 29 11:19:33.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:35.596: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.125573262s Jan 29 11:19:35.596: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:37.598: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.12678381s Jan 29 11:19:37.598: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:39.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.125868254s Jan 29 11:19:39.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:41.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.126167155s Jan 29 11:19:41.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:43.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.126306933s Jan 29 11:19:43.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:45.597: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.126135045s Jan 29 11:19:45.597: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:19:47.595: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-47h2m: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-47h2m": dial tcp 34.82.171.183:443: connect: connection refused Jan 29 11:19:47.595: INFO: Pod kube-dns-autoscaler-5f6455f985-47h2m failed to be running and ready, or succeeded. Jan 29 11:19:47.595: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4] Jan 29 11:19:47.595: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:13:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:13:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.29 PodIPs:[{IP:10.64.3.29}] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 5m0s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 11:12:11 +0000 UTC,FinishedAt:2023-01-29 11:13:32 +0000 UTC,ContainerID:containerd://5bc5111d6ad911bb24622bf14e87706433073a168269424f373079a4756826ed,}} Ready:false RestartCount:8 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://5bc5111d6ad911bb24622bf14e87706433073a168269424f373079a4756826ed Started:0xc004e8294f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 29 11:19:47.634: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.82.171.183:443: connect: connection refused: Jan 29 11:19:47.634: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.82.171.183:443: connect: connection refused: Jan 29 11:19:47.634: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-47h2m: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:59:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:00:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:1 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://47de7bf651c6c66b4beb7067f0cd8237151462cd30542dae17a4415076b6cc9c Started:0xc00439bf1a}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 11:19:47.674: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-47h2m/log?container=autoscaler&previous=false": dial tcp 34.82.171.183:443: connect: connection refused: Jan 29 11:19:47.674: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: Get "https://34.82.171.183/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-47h2m/log?container=autoscaler&previous=false": dial tcp 34.82.171.183:443: connect: connection refused: Jan 29 11:19:47.674: INFO: Node bootstrap-e2e-minion-group-7sd9 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:19:47.674 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 11:19:47.674 (3m30.391s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:19:47.674 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 11:19:47.674 Jan 29 11:19:47.714: INFO: Unexpected error: <*url.Error | 0xc004e4d3b0>: { Op: "Get", URL: "https://34.82.171.183/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc004c89e50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003e775f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 171, 183], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0006d7240>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.82.171.183/api/v1/namespaces/kube-system/events": dial tcp 34.82.171.183:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 11:19:47.714 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:19:47.714 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:19:47.714 Jan 29 11:19:47.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 11:19:47.754 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 11:19:47.754 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 11:19:47.754 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:19:47.754 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:19:47.754 STEP: Collecting events from namespace "reboot-1392". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 11:19:47.754 Jan 29 11:19:47.794: INFO: Unexpected error: failed to list events in namespace "reboot-1392": <*url.Error | 0xc003e77650>: { Op: "Get", URL: "https://34.82.171.183/api/v1/namespaces/reboot-1392/events", Err: <*net.OpError | 0xc003e792c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0042da510>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 171, 183], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003f49360>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 11:19:47.794 (40ms) [FAILED] failed to list events in namespace "reboot-1392": Get "https://34.82.171.183/api/v1/namespaces/reboot-1392/events": dial tcp 34.82.171.183:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/29/23 11:19:47.794 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 11:19:47.794 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:19:47.794 STEP: Destroying namespace "reboot-1392" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 11:19:47.794 [FAILED] Couldn't delete ns: "reboot-1392": Delete "https://34.82.171.183/api/v1/namespaces/reboot-1392": dial tcp 34.82.171.183:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.171.183/api/v1/namespaces/reboot-1392", Err:(*net.OpError)(0xc004a7c410)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/29/23 11:19:47.834 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 11:19:47.834 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:19:47.834 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 11:19:47.834 (0s)
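The rhythm in the dump above (a pod status line every ~2s with a running "Elapsed" counter against a 5m0s ceiling) is the e2e framework's "running and ready, or succeeded" wait. As a rough illustration only, not the actual test/e2e helper, the following client-go sketch reproduces that loop; the pod name, namespace, kubeconfig path, and the 2s/5m cadence are simply lifted from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podRunningReadyOrSucceeded mirrors the condition the log keeps re-evaluating:
// Succeeded passes outright; Running passes only with condition {Ready True};
// anything else (e.g. the Pending kube-dns-autoscaler pod above) keeps polling.
func podRunningReadyOrSucceeded(pod *corev1.Pod) bool {
	switch pod.Status.Phase {
	case corev1.PodSucceeded:
		return true
	case corev1.PodRunning:
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Wait up to 5m, re-checking every 2s, like the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-dns-autoscaler-5f6455f985-47h2m", metav1.GetOptions{})
			if err != nil {
				// Returning the error aborts the poll, matching the
				// "Encountered non-retryable error ... connection refused"
				// line at 11:19:47 once the API server went down.
				return false, err
			}
			return podRunningReadyOrSucceeded(pod), nil
		})
	fmt.Println("wait result:", err)
}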
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:16:15.285
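In the dump below, the "unclean" reboot is delivered over SSH as a sysrq sequence: writing 1 to /proc/sys/kernel/sysrq enables the trigger, and after 10s writing "b" to /proc/sysrq-trigger reboots the machine immediately, with no sync, unmount, or graceful kubelet shutdown, which is why each node falls straight to Ready=false. A minimal sketch of that trigger (the prow@ host is illustrative; the test resolves each node's external IP, as at 11:11:15 below):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Illustrative host; the test SSHes each node's external IP as the prow user.
	host := "prow@34.105.52.142"
	// nohup + backgrounding lets the SSH session return before the node dies;
	// sysrq "b" reboots instantly without syncing or unmounting filesystems.
	cmd := `nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &`
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	fmt.Printf("output: %q, err: %v\n", out, err)
}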
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:11:14.489 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 11:11:14.489 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:11:14.489 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 11:11:14.489 Jan 29 11:11:14.489: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 11:11:14.491 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 11:11:14.623 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 11:11:14.706 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 11:11:14.789 (301ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:11:14.789 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 11:11:14.789 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 11:11:14.789 Jan 29 11:11:14.938: INFO: Getting bootstrap-e2e-minion-group-3n8r Jan 29 11:11:14.938: INFO: Getting bootstrap-e2e-minion-group-90fc Jan 29 11:11:14.938: INFO: Getting bootstrap-e2e-minion-group-7sd9 Jan 29 11:11:14.985: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7sd9 condition Ready to be true Jan 29 11:11:14.985: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true Jan 29 11:11:14.986: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-90fc condition Ready to be true Jan 29 11:11:15.032: INFO: Node bootstrap-e2e-minion-group-90fc has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:11:15.032: INFO: Node bootstrap-e2e-minion-group-3n8r has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:11:15.032: INFO: Node bootstrap-e2e-minion-group-7sd9 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0] Jan 29 11:11:15.032: INFO: 
Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0] Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-47h2m" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:11:15.032: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ppxd4" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 11:11:15.082: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 50.082597ms Jan 29 11:11:15.082: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:15.082: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9": Phase="Running", Reason="", readiness=true. Elapsed: 50.200183ms Jan 29 11:11:15.082: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7sd9" satisfied condition "running and ready, or succeeded" Jan 29 11:11:15.082: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.425869ms Jan 29 11:11:15.082: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:15.083: INFO: Pod "metadata-proxy-v0.1-ppxd4": Phase="Running", Reason="", readiness=true. Elapsed: 50.440042ms Jan 29 11:11:15.083: INFO: Pod "metadata-proxy-v0.1-ppxd4" satisfied condition "running and ready, or succeeded" Jan 29 11:11:15.083: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 50.632454ms Jan 29 11:11:15.083: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded" Jan 29 11:11:15.083: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 50.811275ms Jan 29 11:11:15.083: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded" Jan 29 11:11:15.083: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j] Jan 29 11:11:15.083: INFO: Getting external IP address for bootstrap-e2e-minion-group-90fc Jan 29 11:11:15.083: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-90fc(34.105.52.142:22) Jan 29 11:11:15.083: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 50.821607ms Jan 29 11:11:15.083: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded" Jan 29 11:11:15.083: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 50.992754ms Jan 29 11:11:15.083: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded" Jan 29 11:11:15.083: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh] Jan 29 11:11:15.083: INFO: Getting external IP address for bootstrap-e2e-minion-group-3n8r Jan 29 11:11:15.083: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-3n8r(34.145.60.3:22) Jan 29 11:11:15.611: INFO: ssh prow@34.145.60.3:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 11:11:15.611: INFO: ssh prow@34.145.60.3:22: stdout: "" Jan 29 11:11:15.611: INFO: ssh prow@34.145.60.3:22: stderr: "" Jan 29 11:11:15.611: INFO: ssh prow@34.145.60.3:22: exit code: 0 Jan 29 11:11:15.611: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be false Jan 29 11:11:15.616: INFO: ssh prow@34.105.52.142:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 11:11:15.616: INFO: ssh prow@34.105.52.142:22: stdout: "" Jan 29 11:11:15.616: INFO: ssh prow@34.105.52.142:22: stderr: "" Jan 29 11:11:15.616: INFO: ssh prow@34.105.52.142:22: exit code: 0 Jan 29 11:11:15.616: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be false Jan 29 11:11:15.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:15.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:17.126: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.094373345s Jan 29 11:11:17.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.094274113s Jan 29 11:11:17.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:17.126: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:17.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:17.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:19.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094036705s Jan 29 11:11:19.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:19.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.094725635s Jan 29 11:11:19.127: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:19.757: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:19.757: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:21.128: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.096097692s Jan 29 11:11:21.128: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.095956125s Jan 29 11:11:21.128: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:21.128: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:21.804: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:21.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:23.129: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096777061s Jan 29 11:11:23.129: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:23.130: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.098355386s Jan 29 11:11:23.130: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:23.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:23.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:25.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.094476379s Jan 29 11:11:25.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:25.128: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.09602426s Jan 29 11:11:25.128: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:25.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:25.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:27.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.095144479s Jan 29 11:11:27.127: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:27.128: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.095618145s Jan 29 11:11:27.128: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:27.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:27.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:29.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.094699621s Jan 29 11:11:29.127: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:29.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 14.094770594s Jan 29 11:11:29.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:29.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:29.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:31.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.094623684s Jan 29 11:11:31.127: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:31.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 16.094795463s Jan 29 11:11:31.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:32.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:32.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:33.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 18.093361166s Jan 29 11:11:33.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:33.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.094945521s Jan 29 11:11:33.127: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:34.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:34.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:35.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 20.09496271s Jan 29 11:11:35.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:35.129: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.096640701s Jan 29 11:11:35.129: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }] Jan 29 11:11:36.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:36.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 11:11:37.130: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 22.097948732s Jan 29 11:11:37.130: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:11:37.131: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.099077546s
Jan 29 11:11:37.131: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:38.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:38.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:39.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 24.094818135s
Jan 29 11:11:39.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:39.129: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.096606003s
Jan 29 11:11:39.129: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:40.223: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:40.223: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:41.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 26.093911482s
Jan 29 11:11:41.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:41.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.095302206s
Jan 29 11:11:41.127: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:42.268: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:42.268: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:43.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 28.094912316s
Jan 29 11:11:43.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:43.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.095159919s
Jan 29 11:11:43.127: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:44.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:44.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:45.128: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 30.095494919s
Jan 29 11:11:45.128: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:45.128: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.09564684s
Jan 29 11:11:45.128: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:46.359: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:46.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:47.154: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.121692946s
Jan 29 11:11:47.154: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:47.157: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 32.124597616s
Jan 29 11:11:47.157: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:48.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:48.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:49.131: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 34.09876006s
Jan 29 11:11:49.131: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:49.131: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.098997028s
Jan 29 11:11:49.131: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:50.452: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:50.452: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:51.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 36.094899578s
Jan 29 11:11:51.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:51.129: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.096928574s
Jan 29 11:11:51.129: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:52.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:52.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:53.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 38.093780884s
Jan 29 11:11:53.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:53.128: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.095967327s
Jan 29 11:11:53.128: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:54.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:54.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:55.128: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 40.095862498s
Jan 29 11:11:55.128: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:55.129: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.09742165s
Jan 29 11:11:55.129: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:56.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:56.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:57.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 42.09359091s
Jan 29 11:11:57.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:57.128: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.095634122s
Jan 29 11:11:57.128: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:11:58.633: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:58.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:11:59.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 44.095092313s
Jan 29 11:11:59.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:11:59.128: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.096462717s
Jan 29 11:11:59.129: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:12:00.677: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-3n8r condition Ready to be true
Jan 29 11:12:00.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:12:00.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:01.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.094737397s
Jan 29 11:12:01.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 46.094648444s
Jan 29 11:12:01.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:12:01.127: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:12:02.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 11:12:02.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:03.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 48.095360216s
Jan 29 11:12:03.127: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.09549211s
Jan 29 11:12:03.128: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:12:03.128: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:12:04.791: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-90fc condition Ready to be true
Jan 29 11:12:04.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:04.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:48.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:12:48.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:48.829: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m33.796994861s
Jan 29 11:12:48.829: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:12:48.829: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m33.797320086s
Jan 29 11:12:48.829: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-7sd9' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:09:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:47 +0000 UTC }]
Jan 29 11:12:49.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.093175632s
Jan 29 11:12:49.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:12:49.128: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m34.095553297s
Jan 29 11:12:49.128: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 11:12:50.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:12:50.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:51.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.094765863s
Jan 29 11:12:51.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:12:52.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:52.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:12:53.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.092713693s
Jan 29 11:12:53.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:12:54.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
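The "running and ready, or succeeded" lines above come from the test's pod wait loop: every ~2s it re-fetches each tracked pod and re-evaluates the condition until the 5m0s budget runs out (volume-snapshot-controller-0 flips Ready at 11:12:49 and passes; kube-dns-autoscaler stays Pending). As a rough illustration of that style of poll, here is a minimal standalone client-go sketch. This is an approximation, not the e2e framework's own helper; the namespace, pod name, and 2s/5m values are lifted from the log, and a kubeconfig at $HOME/.kube/config is assumed.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// runningAndReadyOrSucceeded mirrors the condition being logged above:
// a pod passes if it has Succeeded, or if it is Running with Ready=True.
func runningAndReadyOrSucceeded(pod *v1.Pod) bool {
	if pod.Status.Phase == v1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; the real test wires this in via the framework.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	start := time.Now()
	for {
		// Re-fetch the pod on every tick, as the log's repeated status lines show.
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "volume-snapshot-controller-0", metav1.GetOptions{})
		if err == nil && runningAndReadyOrSucceeded(pod) {
			fmt.Printf("pod satisfied condition after %v\n", time.Since(start))
			return
		}
		if time.Since(start) > 5*time.Minute { // the 5m0s budget seen in the log
			fmt.Println("timed out waiting for pod")
			return
		}
		time.Sleep(2 * time.Second) // the ~2s cadence visible in the timestamps
	}
}

When the condition never flips, a loop like this exhausts its budget exactly the way the log does for kube-dns-autoscaler-5f6455f985-47h2m.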
Jan 29 11:12:54.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:12:55.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.093629995s
Jan 29 11:12:55.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:12:56.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:56.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:12:57.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.092680877s
Jan 29 11:12:57.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:12:58.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:12:58.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:12:59.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.092264112s
Jan 29 11:12:59.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:00.419: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:00.419: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:01.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.093427038s
Jan 29 11:13:01.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:02.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:02.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:03.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.093382212s
Jan 29 11:13:03.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:04.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:04.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:05.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.093486335s
Jan 29 11:13:05.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:06.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:06.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:07.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.092836541s
Jan 29 11:13:07.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:08.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:08.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:09.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.0930989s
Jan 29 11:13:09.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:10.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:10.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:11.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.093575823s
Jan 29 11:13:11.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:12.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:12.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:13.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.092766326s
Jan 29 11:13:13.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:14.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:14.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:15.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.092678526s
Jan 29 11:13:15.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:16.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:16.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:17.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.094211598s
Jan 29 11:13:17.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:18.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:18.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:19.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.093459581s
Jan 29 11:13:19.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:20.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:20.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:21.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.092413272s
Jan 29 11:13:21.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:22.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:22.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:23.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.093372726s
Jan 29 11:13:23.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:24.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:24.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:25.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.092774656s
Jan 29 11:13:25.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:27.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:27.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:27.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.092143208s
Jan 29 11:13:27.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:29.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:29.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:29.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.09268311s
Jan 29 11:13:29.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:31.117: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:31.117: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:31.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.093096178s
Jan 29 11:13:31.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:33.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.092829163s
Jan 29 11:13:33.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:33.163: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:33.163: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:35.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.091992969s
Jan 29 11:13:35.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:35.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:35.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
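From 11:12:00 onward the node checks flip: the kubelets on 3n8r and 90fc stop posting status (NodeStatusUnknown) and the node lifecycle controller adds node.kubernetes.io/unreachable NoSchedule/NoExecute taints, which is what the loop keeps reporting while it waits for the rebooted nodes to come back Ready. A rough client-go sketch of that kind of Ready/taint inspection follows. This is illustrative only, not the framework's code; the node name is taken from the log and the kubeconfig location is assumed.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// clientcmd.RecommendedHomeFile is ~/.kube/config; assumed for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-3n8r", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Report the Ready condition, as the "Condition Ready of node ..." lines do.
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("Condition Ready is %s. Reason: %s, message: %s\n", c.Status, c.Reason, c.Message)
		}
	}

	// Report unreachable taints placed by the node lifecycle controller,
	// matching the "tainted by NodeController with [...]" lines in the log.
	for _, t := range node.Spec.Taints {
		if t.Key == v1.TaintNodeUnreachable {
			fmt.Printf("tainted: %s:%s since %v\n", t.Key, t.Effect, t.TimeAdded)
		}
	}
}

A NotReady node that also carries these taints is exactly the state the log reports as "is false, but Node is tainted by NodeController with [...]": status is gone and the controller has already marked the node unreachable.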
Jan 29 11:13:37.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.092610203s
Jan 29 11:13:37.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:37.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:37.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:39.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.093208521s
Jan 29 11:13:39.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:39.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:39.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:41.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.093350493s
Jan 29 11:13:41.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:41.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:41.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:43.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.092407354s
Jan 29 11:13:43.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:43.397: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:43.397: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:45.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.093439282s
Jan 29 11:13:45.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:45.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:45.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:47.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.092311312s
Jan 29 11:13:47.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:47.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:47.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:49.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.092158721s
Jan 29 11:13:49.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:49.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:49.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:51.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.092238025s
Jan 29 11:13:51.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:51.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:51.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:53.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.091873586s
Jan 29 11:13:53.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:53.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:53.624: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:55.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.092659344s
Jan 29 11:13:55.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:55.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:55.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:57.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.093461568s
Jan 29 11:13:57.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:57.718: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:13:57.718: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:59.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.093651491s
Jan 29 11:13:59.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:13:59.764: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:13:59.764: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:01.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.092992153s
Jan 29 11:14:01.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:01.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:01.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:03.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.092025113s
Jan 29 11:14:03.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:03.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:03.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:05.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.092247953s
Jan 29 11:14:05.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:05.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:05.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:07.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.092198855s
Jan 29 11:14:07.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:07.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:07.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:09.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.092982517s
Jan 29 11:14:09.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:09.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:09.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:11.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.092778097s
Jan 29 11:14:11.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:12.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:12.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:13.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.092850497s
Jan 29 11:14:13.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:14.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:14.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:15.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.092117316s
Jan 29 11:14:15.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:16.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:16.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:17.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.092072215s
Jan 29 11:14:17.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:18.181: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:18.181: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:19.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.093073496s
Jan 29 11:14:19.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:20.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:20.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:21.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.092428257s
Jan 29 11:14:21.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:22.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:22.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:23.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.094143434s
Jan 29 11:14:23.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:24.320: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:24.320: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:25.147: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.114864347s
Jan 29 11:14:25.147: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:26.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:26.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:27.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.093333491s
Jan 29 11:14:27.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:28.414: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:28.414: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:29.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.092323139s
Jan 29 11:14:29.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:30.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:30.460: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:31.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.092905534s
Jan 29 11:14:31.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:32.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:32.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:33.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.093187857s
Jan 29 11:14:33.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:34.576: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:34.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:35.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.091954758s
Jan 29 11:14:35.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:36.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:36.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:37.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.092476176s
Jan 29 11:14:37.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:38.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:38.665: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:39.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.094208522s
Jan 29 11:14:39.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:40.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:40.708: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:41.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.092912812s
Jan 29 11:14:41.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:42.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:42.752: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:43.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.092024601s
Jan 29 11:14:43.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:44.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:44.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:45.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.092938462s
Jan 29 11:14:45.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:46.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:46.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:47.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.093290172s
Jan 29 11:14:47.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:14:48.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:14:48.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:14:49.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false.
Elapsed: 3m34.093292658s Jan 29 11:14:49.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:14:50.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure Jan 29 11:14:50.936: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:14:51.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.093022399s Jan 29 11:14:51.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:14:52.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure Jan 29 11:14:52.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:14:53.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.091963853s Jan 29 11:14:53.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:14:55.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure Jan 29 11:14:55.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:14:55.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.092549627s Jan 29 11:14:55.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:14:57.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure Jan 29 11:14:57.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:14:57.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m42.091972962s Jan 29 11:14:57.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:14:59.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure Jan 29 11:14:59.111: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:14:59.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.093298209s Jan 29 11:14:59.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:15:01.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.092133166s Jan 29 11:15:01.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:15:01.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure Jan 29 11:15:01.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:15:03.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.09336074s Jan 29 11:15:03.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:15:03.188: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure Jan 29 11:15:03.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 11:15:05.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.094692758s Jan 29 11:15:05.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending' Jan 29 11:15:05.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure Jan 29 11:15:05.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 11:15:07.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.09329637s
Jan 29 11:15:07.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:07.275: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:07.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:09.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.095246312s
Jan 29 11:15:09.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:09.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:09.334: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:11.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.094039985s
Jan 29 11:15:11.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:11.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:11.378: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:13.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.093178283s
Jan 29 11:15:13.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:13.408: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:13.421: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:15.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.093419106s
Jan 29 11:15:15.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:15.452: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:15.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:17.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.09205813s
Jan 29 11:15:17.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:17.496: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:17.532: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:19.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.092636121s
Jan 29 11:15:19.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:19.539: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:19.576: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:21.128: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.095416283s
Jan 29 11:15:21.128: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:21.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:21.619: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:23.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.092470676s
Jan 29 11:15:23.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:23.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:23.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:25.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.092951973s
Jan 29 11:15:25.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:25.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:25.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:27.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.092342556s
Jan 29 11:15:27.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:27.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:27.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:29.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.092452826s
Jan 29 11:15:29.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:29.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 11:11:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 11:12:04 +0000 UTC}]. Failure
Jan 29 11:15:29.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 11:15:31.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.092188319s
Jan 29 11:15:31.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:31.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-3n8r is false instead of true. Reason: KubeletNotReady, message: [PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized]
Jan 29 11:15:31.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:15:31 +0000 UTC} {node.kubernetes.io/not-ready NoSchedule 2023-01-29 11:15:31 +0000 UTC}]. Failure
Jan 29 11:15:33.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.092239746s
Jan 29 11:15:33.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:33.850: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh]
Jan 29 11:15:33.850: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-zzqvh" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:15:33.850: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:15:33.894: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=false. Elapsed: 43.576768ms
Jan 29 11:15:33.894: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-3n8r' on 'bootstrap-e2e-minion-group-3n8r' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:11:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:11:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }]
Jan 29 11:15:33.894: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=false. Elapsed: 43.729446ms
Jan 29 11:15:33.894: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-zzqvh' on 'bootstrap-e2e-minion-group-3n8r' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:11:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:00:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }]
Jan 29 11:15:33.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:15:31 +0000 UTC}]. Failure
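The "Condition Ready ... but Node is tainted by NodeController" lines above come from checking a node's Ready condition together with the taints the node lifecycle controller applies while the kubelet is unreachable (node.kubernetes.io/unreachable, node.kubernetes.io/not-ready). A sketch of the same inspection with client-go; `reportNodeReady` is an illustrative name, not the framework's API:

```go
package podwait

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportNodeReady prints the node's Ready condition plus any taints the
// node lifecycle controller has applied, matching the log lines above.
func reportNodeReady(cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ready := v1.ConditionUnknown
	var reason, message string
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			ready, reason, message = c.Status, c.Reason, c.Message
		}
	}
	fmt.Printf("Condition Ready of node %s is %s. Reason: %s, message: %s\n",
		name, ready, reason, message)
	// Taints such as node.kubernetes.io/unreachable are added by the node
	// lifecycle controller while the kubelet stops posting status, and are
	// removed again once the node recovers.
	for _, t := range node.Spec.Taints {
		fmt.Printf("  taint %s:%s (added %v)\n", t.Key, t.Effect, t.TimeAdded)
	}
	return nil
}
```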
Jan 29 11:15:35.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.092691534s
Jan 29 11:15:35.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:35.938: INFO: Pod "metadata-proxy-v0.1-zzqvh": Phase="Running", Reason="", readiness=true. Elapsed: 2.08823873s
Jan 29 11:15:35.938: INFO: Pod "metadata-proxy-v0.1-zzqvh" satisfied condition "running and ready, or succeeded"
Jan 29 11:15:35.938: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=false. Elapsed: 2.088223403s
Jan 29 11:15:35.938: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-3n8r' on 'bootstrap-e2e-minion-group-3n8r' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:15:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 11:15:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 10:57:31 +0000 UTC }]
Jan 29 11:15:35.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-90fc is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 11:15:31 +0000 UTC}]. Failure
Jan 29 11:15:37.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.095187599s
Jan 29 11:15:37.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:37.938: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r": Phase="Running", Reason="", readiness=true. Elapsed: 4.08843228s
Jan 29 11:15:37.938: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-3n8r" satisfied condition "running and ready, or succeeded"
Jan 29 11:15:37.938: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-3n8r metadata-proxy-v0.1-zzqvh]
Jan 29 11:15:37.938: INFO: Reboot successful on node bootstrap-e2e-minion-group-3n8r
Jan 29 11:15:37.990: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j]
Jan 29 11:15:37.990: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-mwf7j" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:15:37.990: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-90fc" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 11:15:38.034: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc": Phase="Running", Reason="", readiness=true. Elapsed: 44.389306ms
Jan 29 11:15:38.034: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-90fc" satisfied condition "running and ready, or succeeded"
Jan 29 11:15:38.034: INFO: Pod "metadata-proxy-v0.1-mwf7j": Phase="Running", Reason="", readiness=true. Elapsed: 44.566355ms
Jan 29 11:15:38.034: INFO: Pod "metadata-proxy-v0.1-mwf7j" satisfied condition "running and ready, or succeeded"
Jan 29 11:15:38.034: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-90fc metadata-proxy-v0.1-mwf7j]
Jan 29 11:15:38.034: INFO: Reboot successful on node bootstrap-e2e-minion-group-90fc
Jan 29 11:15:39.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.092989198s
Jan 29 11:15:39.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:41.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.092974001s
Jan 29 11:15:41.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:43.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.092329641s
Jan 29 11:15:43.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:45.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.092515354s
Jan 29 11:15:45.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:47.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.093169579s
Jan 29 11:15:47.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:49.127: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.094646636s
Jan 29 11:15:49.127: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:51.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.107063578s
Jan 29 11:15:51.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:53.154: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.122328235s
Jan 29 11:15:53.154: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:55.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.092315116s
Jan 29 11:15:55.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:57.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.092661813s
Jan 29 11:15:57.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:15:59.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.094120158s
Jan 29 11:15:59.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:01.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.092335168s
Jan 29 11:16:01.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:03.124: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.09214315s
Jan 29 11:16:03.124: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:05.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.092418045s
Jan 29 11:16:05.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:07.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.09240444s
Jan 29 11:16:07.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:09.126: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.093427033s
Jan 29 11:16:09.126: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:11.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.092468538s
Jan 29 11:16:11.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:13.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.092740356s
Jan 29 11:16:13.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Automatically polling progress:
  [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart (Spec Runtime: 5m0.301s)
    test/e2e/cloud/gcp/reboot.go:103
    In [It] (Node Runtime: 5m0s)
      test/e2e/cloud/gcp/reboot.go:103

  Spec Goroutine
  goroutine 7625 [semacquire, 5 minutes]
    sync.runtime_Semacquire(0xc001480588?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fd091ede4a0?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fd091ede4a0?, 0xc003a3d880}, {0x8147108?, 0xc003fb44e0}, {0x78b3e17, 0x7d}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.4({0x7fd091ede4a0?, 0xc003a3d880?})
      test/e2e/cloud/gcp/reboot.go:106
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc003a3d880})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841

  Goroutines of Interest
  goroutine 7628 [chan receive, 5 minutes]
    k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7fd091ede4a0?, 0xc003a3d880}, {0x8147108?, 0xc003fb44e0}, {0x76d190b, 0xb}, {0xc0042cb440, 0x4, 0x4}, 0x45d964b800, ...)
      test/e2e/framework/pod/resource.go:531
    k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...)
      test/e2e/framework/pod/resource.go:508
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fd091ede4a0, 0xc003a3d880}, {0x8147108, 0xc003fb44e0}, {0x7ffc12df95ee, 0x3}, {0xc00350f020, 0x1f}, {0x78b3e17, 0x7d})
      test/e2e/cloud/gcp/reboot.go:284
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 11:16:15.125: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.092823292s
Jan 29 11:16:15.125: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:15.167: INFO: Pod "kube-dns-autoscaler-5f6455f985-47h2m": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.13466431s
Jan 29 11:16:15.167: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-47h2m' on 'bootstrap-e2e-minion-group-7sd9' to be 'Running' but was 'Pending'
Jan 29 11:16:15.167: INFO: Pod kube-dns-autoscaler-5f6455f985-47h2m failed to be running and ready, or succeeded.
Jan 29 11:16:15.167: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-47h2m kube-proxy-bootstrap-e2e-minion-group-7sd9 metadata-proxy-v0.1-ppxd4 volume-snapshot-controller-0]
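The goroutine dump above shows the shape of the test: testReboot blocks on a sync.WaitGroup while one goroutine per node (testReboot.func2, which calls rebootNode) reboots its node and waits for that node's pods. A minimal sketch of that fan-out pattern; `testRebootPattern` and `rebootOne` are illustrative stand-ins, not the e2e framework's real signatures:

```go
package podwait

import "sync"

// testRebootPattern fans out one worker per node and waits for all of
// them, mirroring the Spec Goroutine blocked in sync.(*WaitGroup).Wait
// above. rebootOne stands in for the real rebootNode helper.
func testRebootPattern(nodes []string, rebootOne func(node string) bool) bool {
	result := make([]bool, len(nodes))
	var wg sync.WaitGroup
	wg.Add(len(nodes))
	for i := range nodes {
		go func(ix int) {
			defer wg.Done()
			result[ix] = rebootOne(nodes[ix]) // each node is checked independently
		}(i)
	}
	wg.Wait() // this is where the test sat for the full 5m0s budget
	for _, ok := range result {
		if !ok {
			return false // "at least one node failed to reboot in the time given"
		}
	}
	return true
}
```

Because the workers are independent, two nodes reporting "Reboot successful" (as 3n8r and 90fc do above) cannot rescue the spec once the third node's pod check times out.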
Jan 29 11:16:15.167: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-47h2m: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:59:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:00:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:1 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://47de7bf651c6c66b4beb7067f0cd8237151462cd30542dae17a4415076b6cc9c Started:0xc001092e5a}] QOSClass:Burstable EphemeralContainerStatuses:[]}
Jan 29 11:16:15.239: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: the server rejected our request for an unknown reason (get pods kube-dns-autoscaler-5f6455f985-47h2m):
Jan 29 11:16:15.239: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-47h2m/autoscaler, err: the server rejected our request for an unknown reason (get pods kube-dns-autoscaler-5f6455f985-47h2m):
Jan 29 11:16:15.239: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:09:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 11:09:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 10:57:47 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.25 PodIPs:[{IP:10.64.3.25}] StartTime:2023-01-29 10:57:47 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(0b095899-bdc8-4503-9121-614521f752aa),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 11:08:10 +0000 UTC,FinishedAt:2023-01-29 11:09:19 +0000 UTC,ContainerID:containerd://e9be29408cee2e88c60c130c75908084d90300c023f3b114cdfe6e0a06a77312,}} Ready:false RestartCount:7 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://e9be29408cee2e88c60c130c75908084d90300c023f3b114cdfe6e0a06a77312 Started:0xc00109384f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
Jan 29 11:16:15.285: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller:
I0129 11:12:11.322377 1 main.go:125] Version: v6.1.0
I0129 11:12:11.323414 1 main.go:168] Metrics path successfully registered at /metrics
I0129 11:12:11.323555 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s]
I0129 11:12:51.650578 1 main.go:224] Metrics http server successfully started on :9102, /metrics
I0129 11:12:51.651052 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651077 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651402 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651547 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651809 1 reflector.go:221] Starting reflector *v1.VolumeSnapshot (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651830 1 reflector.go:257] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.652220 1 snapshot_controller_base.go:152] Starting snapshot controller
I0129 11:12:51.652483 1 reflector.go:221] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134
I0129 11:12:51.652499 1 reflector.go:257] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0129 11:12:51.752830 1 shared_informer.go:285] caches populated
I0129 11:12:51.752876 1 snapshot_controller_base.go:509] controller initialized
Jan 29 11:16:15.285: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller:
I0129 11:12:11.322377 1 main.go:125] Version: v6.1.0
I0129 11:12:11.323414 1 main.go:168] Metrics path successfully registered at /metrics
I0129 11:12:11.323555 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s]
I0129 11:12:51.650578 1 main.go:224] Metrics http server successfully started on :9102, /metrics
I0129 11:12:51.651052 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651077 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651402 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651547 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651809 1 reflector.go:221] Starting reflector *v1.VolumeSnapshot (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.651830 1 reflector.go:257] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
I0129 11:12:51.652220 1 snapshot_controller_base.go:152] Starting snapshot controller
I0129 11:12:51.652483 1 reflector.go:221] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134
I0129 11:12:51.652499 1 reflector.go:257] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0129 11:12:51.752830 1 shared_informer.go:285] caches populated
I0129 11:12:51.752876 1 snapshot_controller_base.go:509] controller initialized
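"Retrieving log for the last terminated container" above corresponds to fetching logs with the Previous option set, which returns the output of the container's last terminated instance; this is what makes crash-looping pods like volume-snapshot-controller-0 debuggable. A sketch with client-go; `lastTerminatedLog` is an illustrative helper name:

```go
package podwait

import (
	"context"
	"io"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// lastTerminatedLog fetches the log of the previous (last terminated)
// instance of a container, as the framework does for CrashLoopBackOff pods.
func lastTerminatedLog(cs kubernetes.Interface, ns, pod, container string) (string, error) {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &v1.PodLogOptions{
		Container: container,
		Previous:  true, // logs of the last terminated container instance
	})
	rc, err := req.Stream(context.TODO())
	if err != nil {
		return "", err // e.g. "the server rejected our request", as seen above
	}
	defer rc.Close()
	b, err := io.ReadAll(rc)
	return string(b), err
}
```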
Jan 29 11:16:15.285: INFO: Node bootstrap-e2e-minion-group-7sd9 failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 11:16:15.285
< Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 11:16:15.285 (5m0.496s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 11:16:15.285
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 11:16:15.285
Jan 29 11:16:15.337: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-85z9q to bootstrap-e2e-minion-group-7sd9 Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.639797337s (2.639812936s including waiting) Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: Get "http://10.64.3.17:8181/ready": dial tcp 10.64.3.17:8181: connect: connection refused Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-85z9q_kube-system(a8de34c0-3754-4f31-8c5e-d047238243e1) Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": dial tcp 10.64.3.24:8181: connect: connection refused Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-85z9q: {kubelet bootstrap-e2e-minion-group-7sd9} Unhealthy: Liveness probe failed: Get "http://10.64.3.24:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-tbk49 
to bootstrap-e2e-minion-group-3n8r Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.071624726s (1.071641644s including waiting) Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": dial tcp 10.64.2.4:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Container coredns failed liveness probe, will be restarted Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-tbk49 Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-tbk49 Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and 
re-created. Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f-tbk49: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container coredns Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-85z9q Jan 29 11:16:15.338: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-tbk49 Jan 29 11:16:15.338: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 11:16:15.338: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 11:16:15.338: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 11:16:15.338: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 11:16:15.338: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 11:16:15.338: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 11:16:15.338: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 11:16:15.338: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 11:16:15.338: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 11:16:15.338: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 11:16:15.338: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 11:16:15.338: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 29 11:16:15.338: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:16:15.338: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 11:16:15.338: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a845b became leader Jan 29 11:16:15.338: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a15ba became leader Jan 29 11:16:15.338: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_bd5ff became leader Jan 29 11:16:15.338: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_66f84 became leader Jan 29 11:16:15.338: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_733a2 became leader Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b69l8 to bootstrap-e2e-minion-group-7sd9 Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.678277477s (1.67829785s including waiting) Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
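Note: the run of LeaderElection events here (and for kube-controller-manager and kube-scheduler below) is expected in this test: each time the master reboots, the component comes back, re-acquires its lock, and emits a fresh "became leader" event, so one event per reboot is normal. A rough sketch of the client-go machinery behind such events (lock name, namespace, and timings are illustrative, and ingress-gce may use a different lock flavor):

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLease acquires a Lease-based lock and runs the callbacks; each
// successful acquisition is what surfaces as a "became leader" event.
func runWithLease(ctx context.Context, clientset *kubernetes.Clientset, id string) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-lock", Namespace: "kube-system"}, // illustrative
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* hold leadership */ },
			OnStoppedLeading: func() { /* lease lost, e.g. across a reboot */ },
		},
	})
}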
Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-b69l8: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b69l8_kube-system(fae56098-57a4-4079-a8fc-75f48b84c442) Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-h9nwn to bootstrap-e2e-minion-group-3n8r Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.129847ms (676.140226ms including waiting) Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-h9nwn_kube-system(0ac52dd7-f76d-4f28-9d8a-8af2e2676683) Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-h9nwn: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-kxtrk to bootstrap-e2e-minion-group-90fc Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 824.620705ms (824.644728ms including waiting) Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
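Note: the konnectivity-agent failures above follow the standard kubelet probe cycle: the HTTP liveness probe against :8093/healthz times out while the node's inbound traffic is still dropped, and after enough consecutive misses the kubelet kills and restarts the container ("failed liveness probe, will be restarted"). Reconstructed from the events, the probe looks roughly like this; the path and port come from the log, while the timing fields are assumptions, not read from the konnectivity-agent manifest:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// agentLivenessProbe is the HTTP liveness probe implied by the events:
// GET /healthz on port 8093. "context deadline exceeded" above means the
// TimeoutSeconds budget elapsed before any response arrived.
var agentLivenessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/healthz",
			Port: intstr.FromInt(8093),
		},
	},
	TimeoutSeconds:   5,  // assumed
	PeriodSeconds:    10, // assumed
	FailureThreshold: 3,  // assumed: restart after three consecutive misses
}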
Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": dial tcp 10.64.1.5:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent-kxtrk: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container konnectivity-agent Jan 29 11:16:15.338: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b69l8 Jan 29 11:16:15.338: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-kxtrk Jan 29 11:16:15.338: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-h9nwn Jan 29 11:16:15.338: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 11:16:15.338: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 11:16:15.338: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 11:16:15.338: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused Jan 29 11:16:15.338: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
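Note: every "event for <name>" line in this dump is the e2e framework listing kube-system events and keying them by involved object. The equivalent with plain client-go is roughly the following (clientset is assumed to be an already configured *kubernetes.Clientset):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpEvents prints kube-system events in the same shape as the lines
// above: involved object, source component/host, reason, and message.
func dumpEvents(ctx context.Context, clientset *kubernetes.Clientset) error {
	events, err := clientset.CoreV1().Events("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
	return nil
}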
Jan 29 11:16:15.338: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 11:16:15.338: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 11:16:15.338: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 11:16:15.338: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 11:16:15.338: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 11:16:15.338: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 11:16:15.338: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 11:16:15.338: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 11:16:15.338: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 11:16:15.338: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 11:16:15.338: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 11:16:15.338: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 11:16:15.338: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 11:16:15.338: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
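Note: the kube-apiserver probe failures above are consistent with the rest of the run: "connection refused" on 127.0.0.1:443 means the apiserver process itself was down (the same reason the earlier namespace creation against the external endpoint was refused), while the 500 from a readiness probe means it was up but a readiness check was still failing. The same /readyz endpoint the kubelet probes can also be queried through client-go, e.g. (a sketch; clientset is an assumed configured client):

package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkReadyz fetches /readyz?verbose=true, which lists each readiness
// check and its status; an error return here corresponds to the
// "connection refused" probe failures in the events above.
func checkReadyz(ctx context.Context, clientset *kubernetes.Clientset) error {
	body, err := clientset.Discovery().RESTClient().
		Get().AbsPath("/readyz").Param("verbose", "true").DoRaw(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("%s\n", body)
	return nil
}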
Jan 29 11:16:15.338: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 11:16:15.338: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0298c03a-3832-4855-a2af-cf203f6d5229 became leader Jan 29 11:16:15.338: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_b64428ec-4368-4776-ac50-8d5ce5d3c3d7 became leader Jan 29 11:16:15.338: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_93420249-344c-40fd-8874-2327496da9f4 became leader Jan 29 11:16:15.338: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_07258058-394f-4fde-9634-ec2cdd7d618d became leader Jan 29 11:16:15.338: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_78340de8-131f-416f-9312-082a0482b7aa became leader Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-47h2m to bootstrap-e2e-minion-group-7sd9 Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.618413775s (1.618457503s including waiting) Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container autoscaler Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container autoscaler Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container autoscaler Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985-47h2m: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
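Note: the FailedScheduling message above is the reboot itself at work: while a node recovers, the node lifecycle controller keeps a node.kubernetes.io/not-ready taint on it, and a plain Deployment pod such as kube-dns-autoscaler carries no toleration for it, so it stays Pending until some node turns Ready again. DaemonSet pods (kube-proxy, metadata-proxy, konnectivity-agent) keep running through this because the daemonset controller injects tolerations along these lines (a sketch; Effect is left empty so it matches both the NoSchedule and NoExecute forms of the taint):

package sketch

import corev1 "k8s.io/api/core/v1"

// notReadyToleration tolerates the taint named in the FailedScheduling
// event, for any effect and indefinitely (no TolerationSeconds set).
var notReadyToleration = corev1.Toleration{
	Key:      "node.kubernetes.io/not-ready",
	Operator: corev1.TolerationOpExists,
}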
Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-47h2m Jan 29 11:16:15.338: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-3n8r_kube-system(b5176a347e88e1ff4660b164d3f16916) Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-3n8r_kube-system(b5176a347e88e1ff4660b164d3f16916) Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Created: Created container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Started: Started container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-3n8r: {kubelet bootstrap-e2e-minion-group-3n8r} Killing: Stopping container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} Killing: Stopping container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7sd9: {kubelet bootstrap-e2e-minion-group-7sd9} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7sd9_kube-system(20e39278d9aad8613df3183ed37c4881) Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Killing: Stopping container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-90fc_kube-system(81cae927179b6a5281a90fdaa765ded2) Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Created: Created container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-proxy-bootstrap-e2e-minion-group-90fc: {kubelet bootstrap-e2e-minion-group-90fc} Started: Started container kube-proxy Jan 29 11:16:15.338: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 11:16:15.338: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 11:16:15.338: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 11:16:15.338: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 11:16:15.338: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
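Note: the DNSConfigForming warnings here and for coredns above are a node-level resolv.conf issue rather than a reboot symptom: the kubelet caps a pod's resolv.conf at three nameservers (the classic resolver limit), drops the extras, and records this event with the applied line "1.1.1.1 8.8.8.8 1.0.0.1". In spirit, the behavior is this (a simplified sketch, not actual kubelet code):

package sketch

// trimNameservers keeps the first three nameservers and reports the
// rest as dropped, which is what the DNSConfigForming event describes.
func trimNameservers(ns []string) (kept, dropped []string) {
	const maxNameservers = 3 // resolver limit enforced by the kubelet
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}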
Jan 29 11:16:15.338: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 11:16:15.338: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_425c93d9-4e38-470f-b4ba-e1a7e536d147 became leader Jan 29 11:16:15.338: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c84835d1-579f-4af3-bbe9-2d8899072690 became leader Jan 29 11:16:15.338: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_00ed0cb9-b982-4f69-9378-8d53a0626551 became leader Jan 29 11:16:15.338: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3c66f968-f07b-4c3a-8b08-d3d24ec883af became leader Jan 29 11:16:15.338: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_323c779c-76b3-4e92-ab66-cc172e33c203 became leader Jan 29 11:16:15.338: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_932fae63-1339-4a08-bb0e-48323dcdb49d became leader Jan 29 11:16:15.338: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ee680098-c8b4-45fb-b3d1-883390b9eaff became leader Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fqgll to bootstrap-e2e-minion-group-7sd9 Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 660.702719ms (660.716002ms including waiting) Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container default-http-backend Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container default-http-backend Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {node-controller } NodeNotReady: Node is not ready Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Created: Created container default-http-backend Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99-fqgll: {kubelet bootstrap-e2e-minion-group-7sd9} Started: Started container default-http-backend Jan 29 11:16:15.338: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fqgll Jan 29 11:16:15.338: