go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
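The same test can also be run directly from a compiled e2e.test binary against an existing cluster; the binary path below is illustrative and provider-specific flags (project, zone, etc.) may be needed in addition:

    # Illustrative direct invocation of the e2e.test binary (adjust paths and flags for your environment)
    ./_output/bin/e2e.test \
      --kubeconfig="$HOME/.kube/config" \
      --provider=gce \
      --ginkgo.focus='\[sig-cloud-provider-gcp\] Reboot \[Disruptive\] \[Feature:Reboot\] each node by dropping all inbound packets for a while and ensure they function afterwards'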
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 23:26:15.233
(from ginkgo_report.xml)
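For reference, the fault this test injects is the script below, run over SSH on each target node. It is the same command that appears further down in the SSH log entries, shown here with the \n/\t escapes expanded:

    nohup sh -c '
        set -x
        sleep 10
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &

After a 10-second delay it accepts loopback traffic, drops all other inbound packets for 120 seconds, then removes both rules again, logging to /tmp/drop-inbound.log on the node. The test expects each node's Ready condition to turn false while the rules are in place and the node and its pods to recover afterwards; the failure above indicates at least one node did not come back healthy within the allowed time.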
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 23:21:12.892
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 23:21:12.892 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 23:21:12.892
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 23:21:12.892
Jan 28 23:21:12.892: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 23:21:12.894
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 23:21:13.019
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 23:21:13.1
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 23:21:13.181 (289ms)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 23:21:13.181
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 23:21:13.181 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 23:21:13.181
Jan 28 23:21:13.276: INFO: Getting bootstrap-e2e-minion-group-5kqh
Jan 28 23:21:13.276: INFO: Getting bootstrap-e2e-minion-group-z2p7
Jan 28 23:21:13.276: INFO: Getting bootstrap-e2e-minion-group-v2xx
Jan 28 23:21:13.319: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-5kqh condition Ready to be true
Jan 28 23:21:13.351: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z2p7 condition Ready to be true
Jan 28 23:21:13.351: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-v2xx condition Ready to be true
Jan 28 23:21:13.361: INFO: Node bootstrap-e2e-minion-group-5kqh has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv]
Jan 28 23:21:13.361: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv]
Jan 28 23:21:13.361: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5d8kv" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 23:21:13.361: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-94k5n" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 23:21:13.361: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 23:21:13.362: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-5kqh" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 23:21:13.394: INFO: Node bootstrap-e2e-minion-group-v2xx has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx]
Jan 28 23:21:13.394: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx]
Jan 28 23:21:13.394: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 23:21:13.395: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-cm88n" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 23:21:13.395: INFO: Node bootstrap-e2e-minion-group-z2p7 has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-lw5t2 kube-proxy-bootstrap-e2e-minion-group-z2p7]
Jan 28 23:21:13.395: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-lw5t2 kube-proxy-bootstrap-e2e-minion-group-z2p7]
Jan 28 23:21:13.395: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 23:21:13.395: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-lw5t2" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 23:21:13.405: INFO: Pod "metadata-proxy-v0.1-5d8kv": Phase="Running", Reason="", readiness=true. Elapsed: 43.201798ms
Jan 28 23:21:13.405: INFO: Pod "metadata-proxy-v0.1-5d8kv" satisfied condition "running and ready, or succeeded"
Jan 28 23:21:13.406: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.337763ms
Jan 28 23:21:13.406: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }]
Jan 28 23:21:13.407: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 44.921483ms
Jan 28 23:21:13.407: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }]
Jan 28 23:21:13.407: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 45.931318ms
Jan 28 23:21:13.407: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }]
Jan 28 23:21:13.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx": Phase="Running", Reason="", readiness=true. Elapsed: 44.814832ms
Jan 28 23:21:13.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" satisfied condition "running and ready, or succeeded"
Jan 28 23:21:13.441: INFO: Pod "metadata-proxy-v0.1-lw5t2": Phase="Running", Reason="", readiness=true. Elapsed: 46.174959ms
Jan 28 23:21:13.441: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7": Phase="Running", Reason="", readiness=true. Elapsed: 46.267609ms
Jan 28 23:21:13.441: INFO: Pod "metadata-proxy-v0.1-lw5t2" satisfied condition "running and ready, or succeeded"
Jan 28 23:21:13.441: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" satisfied condition "running and ready, or succeeded"
Jan 28 23:21:13.441: INFO: Pod "metadata-proxy-v0.1-cm88n": Phase="Running", Reason="", readiness=true. Elapsed: 46.705561ms
Jan 28 23:21:13.441: INFO: Pod "metadata-proxy-v0.1-cm88n" satisfied condition "running and ready, or succeeded"
Jan 28 23:21:13.441: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-lw5t2 kube-proxy-bootstrap-e2e-minion-group-z2p7]
Jan 28 23:21:13.441: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx]
Jan 28 23:21:13.441: INFO: Getting external IP address for bootstrap-e2e-minion-group-z2p7
Jan 28 23:21:13.441: INFO: Getting external IP address for bootstrap-e2e-minion-group-v2xx
Jan 28 23:21:13.441: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-v2xx(34.145.43.141:22)
Jan 28 23:21:13.441: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-z2p7(34.168.4.157:22)
Jan 28 23:21:13.978: INFO: ssh prow@34.168.4.157:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 28 23:21:13.978: INFO: ssh prow@34.168.4.157:22: stdout: ""
Jan 28 23:21:13.978: INFO: ssh prow@34.168.4.157:22: stderr: ""
Jan 28 23:21:13.978: INFO: ssh prow@34.168.4.157:22: exit code: 0
Jan 28 23:21:13.978: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-z2p7 condition Ready to be false
Jan 28 23:21:13.990: INFO: ssh prow@34.145.43.141:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 28 23:21:13.990: INFO: ssh prow@34.145.43.141:22: stdout: ""
Jan 28 23:21:13.990: INFO: ssh prow@34.145.43.141:22: stderr: ""
Jan 28 23:21:13.990: INFO: ssh prow@34.145.43.141:22: exit code: 0
Jan 28 23:21:13.990: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-v2xx condition Ready to be false
Jan 28 23:21:14.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 23:21:14.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 23:21:15.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 2.089890675s Jan 28 23:21:15.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:15.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2.090080817s Jan 28 23:21:15.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:15.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 2.090228572s Jan 28 23:21:15.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:16.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:16.075: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:17.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.08846323s Jan 28 23:21:17.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:17.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 4.089064171s Jan 28 23:21:17.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:17.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.089911317s Jan 28 23:21:17.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:18.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:18.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:19.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.086888207s Jan 28 23:21:19.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:19.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 6.089145s Jan 28 23:21:19.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 6.090004572s Jan 28 23:21:19.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:19.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:20.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:20.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:21.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.087095879s Jan 28 23:21:21.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:21.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 8.090205186s Jan 28 23:21:21.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:21.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 8.08948459s Jan 28 23:21:21.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:22.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:22.207: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:23.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.089001261s Jan 28 23:21:23.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:23.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.090190122s Jan 28 23:21:23.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:23.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 10.089498278s Jan 28 23:21:23.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:24.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:24.250: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:25.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.089183006s Jan 28 23:21:25.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:25.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.090398063s Jan 28 23:21:25.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:25.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 12.089698854s Jan 28 23:21:25.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:26.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:26.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:27.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.089591259s Jan 28 23:21:27.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:27.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.090744678s Jan 28 23:21:27.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:27.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 14.089994759s Jan 28 23:21:27.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:28.325: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:28.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:29.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.086450896s Jan 28 23:21:29.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:29.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 16.090349393s Jan 28 23:21:29.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:29.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 16.089585307s Jan 28 23:21:29.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:30.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:30.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:31.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.087922796s Jan 28 23:21:31.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:31.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 18.089613366s Jan 28 23:21:31.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:31.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 18.089868775s Jan 28 23:21:31.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:32.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:32.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:33.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.086777933s Jan 28 23:21:33.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:33.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 20.089285584s Jan 28 23:21:33.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:33.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 20.089675449s Jan 28 23:21:33.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:34.454: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:34.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:35.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.086308039s Jan 28 23:21:35.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:35.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 22.089395923s Jan 28 23:21:35.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:35.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 22.090362952s Jan 28 23:21:35.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:36.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:36.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:37.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.08673948s Jan 28 23:21:37.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:37.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 24.09001795s Jan 28 23:21:37.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:37.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 24.089306747s Jan 28 23:21:37.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:38.556: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:38.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:39.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.088957137s Jan 28 23:21:39.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:39.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.090072225s Jan 28 23:21:39.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:39.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 26.089328486s Jan 28 23:21:39.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:40.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:40.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:41.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.086661836s Jan 28 23:21:41.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:41.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 28.090198285s Jan 28 23:21:41.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:41.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 28.089482797s Jan 28 23:21:41.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:42.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:42.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:43.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.087129625s Jan 28 23:21:43.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:43.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 30.08904718s Jan 28 23:21:43.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 30.089907496s Jan 28 23:21:43.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:43.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:44.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:44.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:45.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.086349379s Jan 28 23:21:45.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:45.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 32.089653392s Jan 28 23:21:45.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 32.090502779s Jan 28 23:21:45.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:45.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:46.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:46.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:47.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 34.089373824s Jan 28 23:21:47.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.089360717s Jan 28 23:21:47.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:47.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:47.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 34.08999164s Jan 28 23:21:47.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:48.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:48.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:49.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.086818615s Jan 28 23:21:49.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:49.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
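(The entries above repeat because the framework re-checks each tracked pod roughly every two seconds and, on failure, logs the pod's full condition list: Initialized, Ready, ContainersReady, PodScheduled. As an illustration only — not the e2e framework's own helper — the Go sketch below expresses the same "running and ready, or succeeded" check with client-go; the function name checkPodReadyOrSucceeded and the KUBECONFIG-based client setup are assumptions made for this example.)

```go
// Illustrative sketch of a "running and ready, or succeeded" pod check.
// Not the e2e framework's code; names and setup are hypothetical.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkPodReadyOrSucceeded returns true when the pod has succeeded, or is
// Running with the Ready condition True; otherwise it returns false together
// with the full condition list, which is what the log above prints on failure.
func checkPodReadyOrSucceeded(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, []corev1.PodCondition, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, nil, err
	}
	if pod.Status.Phase == corev1.PodSucceeded {
		return true, pod.Status.Conditions, nil
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false, pod.Status.Conditions, nil
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, pod.Status.Conditions, nil
		}
	}
	return false, pod.Status.Conditions, nil
}

func main() {
	// Client setup is a placeholder; any kubeconfig-based config works.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ok, conds, err := checkPodReadyOrSucceeded(context.TODO(), cs, "kube-system", "volume-snapshot-controller-0")
	fmt.Println(ok, conds, err)
}
```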
Elapsed: 36.088729221s Jan 28 23:21:49.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 36.089561928s Jan 28 23:21:49.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:49.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:50.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:50.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:51.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.086352546s Jan 28 23:21:51.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:51.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 38.090261787s Jan 28 23:21:51.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.08942935s Jan 28 23:21:51.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:51.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:52.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:52.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:53.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 40.089803768s Jan 28 23:21:53.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:53.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 40.090240295s Jan 28 23:21:53.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:53.453: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.091134889s Jan 28 23:21:53.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:54.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:54.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:55.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.086024713s Jan 28 23:21:55.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:55.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 42.088723333s Jan 28 23:21:55.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.08955862s Jan 28 23:21:55.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:55.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:56.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:56.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:57.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.08615709s Jan 28 23:21:57.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:57.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 44.089006243s Jan 28 23:21:57.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:57.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.089179134s Jan 28 23:21:57.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:59.018: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-v2xx condition Ready to be true Jan 28 23:21:59.018: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:59.060: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:21:59.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.08598007s Jan 28 23:21:59.447: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:59.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 46.089385375s Jan 28 23:21:59.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
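(At this point the log shows node bootstrap-e2e-minion-group-v2xx flipping from Ready=true to Ready=false with reason NodeStatusUnknown ("Kubelet stopped posting node status"), and the framework begins waiting up to 5m0s for the Ready condition to become true again. Below is a minimal sketch of that kind of wait loop, assuming client-go and its wait helpers; waitForNodeReady and the 2-second poll interval are illustrative choices, not the framework's actual values.)

```go
// Illustrative sketch: poll a node's Ready condition until it reaches the
// wanted status or the timeout elapses. Not the e2e framework's implementation.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReadyStatus returns the status of the node's Ready condition
// (True, False, or Unknown when the kubelet has stopped posting status).
func nodeReadyStatus(node *corev1.Node) corev1.ConditionStatus {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status
		}
	}
	return corev1.ConditionUnknown
}

// waitForNodeReady polls every 2s, up to timeout, until Ready equals want.
func waitForNodeReady(cs kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			return nodeReadyStatus(node) == want, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitForNodeReady(cs, "bootstrap-e2e-minion-group-v2xx", corev1.ConditionTrue, 5*time.Minute))
}
```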
Elapsed: 46.088518561s Jan 28 23:21:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:01.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:22:01.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:01.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.086620614s Jan 28 23:22:01.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:01.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 48.09157134s Jan 28 23:22:01.453: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.090766618s Jan 28 23:22:01.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:01.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:03.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:22:03.147: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:03.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.087120679s Jan 28 23:22:03.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:03.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 50.089763161s Jan 28 23:22:03.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:03.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.089962586s Jan 28 23:22:03.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:05.148: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-z2p7 condition Ready to be true Jan 28 23:22:05.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:05.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:05.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.087865496s Jan 28 23:22:05.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:05.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 52.089431952s Jan 28 23:22:05.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:05.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
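(The "Node is tainted by NodeController" lines show the node lifecycle controller adding node.kubernetes.io/unreachable taints — NoSchedule first, then NoExecute — once the kubelet stops reporting. A short, illustrative client-go sketch for inspecting those taints on a node follows; the node name and client setup are placeholders, not part of the test's code.)

```go
// Illustrative sketch: list node.kubernetes.io/unreachable taints on a node.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-v2xx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Taints live on the node spec; TimeAdded records when the controller applied each one.
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" {
			fmt.Printf("%s %s %v\n", t.Key, t.Effect, t.TimeAdded)
		}
	}
}
```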
Elapsed: 52.089868516s Jan 28 23:22:05.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:07.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:07.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:07.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.086344418s Jan 28 23:22:07.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:07.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 54.089091794s Jan 28 23:22:07.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:07.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.089382116s Jan 28 23:22:07.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:09.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:09.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:09.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.08605218s Jan 28 23:22:09.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:09.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 56.088839163s Jan 28 23:22:09.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:09.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.088967445s Jan 28 23:22:09.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:11.320: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:11.320: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:11.450: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.088112295s Jan 28 23:22:11.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:11.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 58.089611622s Jan 28 23:22:11.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.090460128s Jan 28 23:22:11.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:11.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:13.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:13.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:13.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.086627503s Jan 28 23:22:13.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:13.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.089323875s Jan 28 23:22:13.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:13.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.089774044s Jan 28 23:22:13.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:15.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:15.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:15.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.086979732s Jan 28 23:22:15.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:15.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.089089425s Jan 28 23:22:15.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.089923543s Jan 28 23:22:15.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:15.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:17.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.090060231s Jan 28 23:22:17.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:17.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.089368094s Jan 28 23:22:17.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:17.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.090342493s Jan 28 23:22:17.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:17.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:17.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:19.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.089276915s Jan 28 23:22:19.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.090090727s Jan 28 23:22:19.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:19.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:19.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m6.091161213s Jan 28 23:22:19.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:19.500: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:19.500: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:21.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.086275729s Jan 28 23:22:21.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:21.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.090380986s Jan 28 23:22:21.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:21.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.089624808s Jan 28 23:22:21.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:21.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:21.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:23.462: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.100337802s Jan 28 23:22:23.462: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:23.463: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.101243476s Jan 28 23:22:23.463: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:23.464: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m10.101693068s Jan 28 23:22:23.464: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:23.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:23.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:25.462: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.100571145s Jan 28 23:22:25.462: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:25.463: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.102034687s Jan 28 23:22:25.463: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.101207251s Jan 28 23:22:25.463: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:25.463: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:25.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:25.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:27.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.086798982s Jan 28 23:22:27.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:27.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.088289668s Jan 28 23:22:27.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:27.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.088772084s Jan 28 23:22:27.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:27.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:27.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:29.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.086575277s Jan 28 23:22:29.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:29.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.09040024s Jan 28 23:22:29.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.089567489s Jan 28 23:22:29.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:29.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:31.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.086452139s Jan 28 23:22:31.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:31.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m18.090090436s Jan 28 23:22:31.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:31.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.089315512s Jan 28 23:22:31.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:31.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:31.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:33.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.089602844s Jan 28 23:22:33.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:33.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.091868007s Jan 28 23:22:33.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:33.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.092084885s Jan 28 23:22:33.454: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:33.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:33.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:35.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.086986928s Jan 28 23:22:35.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:35.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.088850984s Jan 28 23:22:35.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m22.089700433s Jan 28 23:22:35.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:35.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:35.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:35.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:37.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.089954435s Jan 28 23:22:37.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:37.453: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.090356074s Jan 28 23:22:37.453: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.091177124s Jan 28 23:22:37.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:37.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:37.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:37.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:39.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.086839194s Jan 28 23:22:39.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:39.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.090554053s Jan 28 23:22:39.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m26.089749358s Jan 28 23:22:39.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:39.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:39.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:39.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:41.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.086998321s Jan 28 23:22:41.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:41.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.088979652s Jan 28 23:22:41.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m28.08982781s Jan 28 23:22:41.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:41.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:41.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:41.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:43.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.089086462s Jan 28 23:22:43.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:43.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m30.091396627s Jan 28 23:22:43.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:43.453: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.09072162s Jan 28 23:22:43.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:44.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:44.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:45.474: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.112363086s Jan 28 23:22:45.474: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:45.474: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.112549742s Jan 28 23:22:45.474: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.111755935s Jan 28 23:22:45.474: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:45.474: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:46.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:46.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:47.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.089157992s Jan 28 23:22:47.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:47.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m34.089630928s Jan 28 23:22:47.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:47.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.090470803s Jan 28 23:22:47.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:48.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:48.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:49.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.086340457s Jan 28 23:22:49.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:49.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m36.089049523s Jan 28 23:22:49.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:49.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.089509142s Jan 28 23:22:49.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:50.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:50.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:51.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.086968391s Jan 28 23:22:51.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:51.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m38.089496243s Jan 28 23:22:51.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:51.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.088734125s Jan 28 23:22:51.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:52.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:52.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:53.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.085884085s Jan 28 23:22:53.447: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:53.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.089621544s Jan 28 23:22:53.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m40.088803776s Jan 28 23:22:53.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:53.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:54.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:54.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:55.450: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.088359413s Jan 28 23:22:55.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:55.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m42.090364044s Jan 28 23:22:55.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:55.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.089682502s Jan 28 23:22:55.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:56.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:56.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:57.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.0865798s Jan 28 23:22:57.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:57.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m44.09047317s Jan 28 23:22:57.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:57.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.089708558s Jan 28 23:22:57.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:58.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:58.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:59.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.085847265s Jan 28 23:22:59.447: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:59.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m46.089660039s Jan 28 23:22:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:59.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.088927627s Jan 28 23:22:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:23:00.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:00.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:01.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.086428556s Jan 28 23:23:01.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:01.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m48.08922223s Jan 28 23:23:01.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:23:01.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.090104058s Jan 28 23:23:01.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:02.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:02.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:03.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.087982071s Jan 28 23:23:03.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:03.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m50.089525032s Jan 28 23:23:03.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:03.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=true. Elapsed: 1m50.09000226s Jan 28 23:23:03.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh" satisfied condition "running and ready, or succeeded" Jan 28 23:23:04.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:04.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:05.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.087072083s Jan 28 23:23:05.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:05.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.088764804s Jan 28 23:23:05.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:06.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. 
Failure Jan 28 23:23:06.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:07.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.087892975s Jan 28 23:23:07.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:07.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.088995073s Jan 28 23:23:07.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:08.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:08.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:09.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.086132896s Jan 28 23:23:09.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:09.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m56.087626218s Jan 28 23:23:09.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:10.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:10.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:11.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.08627741s Jan 28 23:23:11.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:11.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.088154915s Jan 28 23:23:11.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:12.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:12.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. 
Failure Jan 28 23:23:13.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.089412341s Jan 28 23:23:13.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:13.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.090577186s Jan 28 23:23:13.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:14.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:14.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:15.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.086410107s Jan 28 23:23:15.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:15.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m2.088146766s Jan 28 23:23:15.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:16.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:16.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:17.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.086532367s Jan 28 23:23:17.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:17.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.089751446s Jan 28 23:23:17.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:18.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:18.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. 
Failure Jan 28 23:23:19.450: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.088488222s Jan 28 23:23:19.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:19.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.088623695s Jan 28 23:23:19.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:20.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:20.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:21.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.087267986s Jan 28 23:23:21.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:21.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m8.088252243s Jan 28 23:23:21.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:22.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:22.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:23.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.089257968s Jan 28 23:23:23.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.089240217s Jan 28 23:23:23.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:23.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:24.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:24.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:25.507: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.145325846s Jan 28 23:23:25.507: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:25.507: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.145480523s Jan 28 23:23:25.507: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:27.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:27.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:27.456: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.094838094s Jan 28 23:23:27.456: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:27.456: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m14.095001814s Jan 28 23:23:27.456: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-lw5t2 kube-proxy-bootstrap-e2e-minion-group-z2p7] Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-cm88n" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-lw5t2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Pod "metadata-proxy-v0.1-cm88n": Phase="Running", Reason="", readiness=true. Elapsed: 46.532709ms Jan 28 23:23:29.101: INFO: Pod "metadata-proxy-v0.1-cm88n" satisfied condition "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Pod "metadata-proxy-v0.1-lw5t2": Phase="Running", Reason="", readiness=true. Elapsed: 46.600364ms Jan 28 23:23:29.101: INFO: Pod "metadata-proxy-v0.1-lw5t2" satisfied condition "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7": Phase="Running", Reason="", readiness=true. Elapsed: 46.868484ms Jan 28 23:23:29.101: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" satisfied condition "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-lw5t2 kube-proxy-bootstrap-e2e-minion-group-z2p7] Jan 28 23:23:29.101: INFO: Reboot successful on node bootstrap-e2e-minion-group-z2p7 Jan 28 23:23:29.101: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx": Phase="Running", Reason="", readiness=true. Elapsed: 46.916985ms Jan 28 23:23:29.101: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" satisfied condition "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:23:29.101: INFO: Reboot successful on node bootstrap-e2e-minion-group-v2xx Jan 28 23:23:29.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.08682045s Jan 28 23:23:29.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:29.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.088365469s Jan 28 23:23:29.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:31.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.085808088s Jan 28 23:23:31.447: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:31.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.087813639s Jan 28 23:23:31.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:33.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m20.086276815s Jan 28 23:23:33.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:33.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.088060214s Jan 28 23:23:33.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:35.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.086257004s Jan 28 23:23:35.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:35.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.087839107s Jan 28 23:23:35.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:37.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m24.090397607s Jan 28 23:23:37.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:37.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.091754497s Jan 28 23:23:37.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:39.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.08604726s Jan 28 23:23:39.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:39.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.087616709s Jan 28 23:23:39.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:41.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m28.086253338s Jan 28 23:23:41.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:41.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.088141774s Jan 28 23:23:41.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:24:43.449: INFO: Retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-94k5n, retrying after 0s: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-dns-autoscaler-5f6455f985-94k5n) Jan 28 23:24:43.452: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": stream error: stream ID 2305; INTERNAL_ERROR; received from peer Jan 28 23:24:43.452: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 28 23:25:32.937: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m19.576030132s Jan 28 23:25:32.938: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:33.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m20.088873796s Jan 28 23:25:33.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:35.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.087850987s Jan 28 23:25:35.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:37.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m24.087990658s Jan 28 23:25:37.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:39.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.088143192s Jan 28 23:25:39.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:41.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m28.087830369s Jan 28 23:25:41.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:43.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.089973866s Jan 28 23:25:43.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:45.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m32.08857565s Jan 28 23:25:45.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:47.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.088048058s Jan 28 23:25:47.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:49.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m36.088087553s Jan 28 23:25:49.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:51.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.087995979s Jan 28 23:25:51.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:53.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.088019999s Jan 28 23:25:53.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:55.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.088316884s Jan 28 23:25:55.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:57.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m44.088536466s Jan 28 23:25:57.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:59.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.089193218s Jan 28 23:25:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:01.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m48.088172396s Jan 28 23:26:01.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:03.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.088203259s Jan 28 23:26:03.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:05.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m52.088316049s
Jan 28 23:26:05.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }]
Jan 28 23:26:07.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.087260597s
Jan 28 23:26:07.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }]
Jan 28 23:26:09.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.088797038s
Jan 28 23:26:09.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }]
Jan 28 23:26:11.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.08804626s
Jan 28 23:26:11.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }]
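Triage note: the entries above are the e2e framework re-evaluating the "running and ready, or succeeded" condition for kube-dns-autoscaler-5f6455f985-94k5n roughly every two seconds until its 5m0s budget expires. For reference, a minimal standalone sketch of that style of check written against plain client-go; the package and function names and the 2s/5m timings here are illustrative, and this is not the framework's actual CheckPodsRunningReadyOrSucceeded implementation.

package podcheck

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// runningReadyOrSucceeded mirrors the condition the log keeps evaluating:
// a pod passes if it already Succeeded, or if it is Running with Ready=True.
func runningReadyOrSucceeded(pod *corev1.Pod) bool {
    if pod.Status.Phase == corev1.PodSucceeded {
        return true
    }
    if pod.Status.Phase != corev1.PodRunning {
        return false
    }
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

// waitForPod polls the pod until it passes the check or the timeout expires,
// logging elapsed time and current conditions much like the entries above.
func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    start := time.Now()
    for {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err == nil && runningReadyOrSucceeded(pod) {
            return nil
        }
        if time.Since(start) > timeout {
            return fmt.Errorf("pod %s/%s not running and ready, or succeeded, after %v", ns, name, timeout)
        }
        if err == nil {
            fmt.Printf("pod %q phase=%s, elapsed=%v, conditions=%v\n", name, pod.Status.Phase, time.Since(start), pod.Status.Conditions)
        }
        time.Sleep(2 * time.Second)
    }
}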
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 5m0.29s)
  test/e2e/cloud/gcp/reboot.go:136
  In [It] (Node Runtime: 5m0.001s)
    test/e2e/cloud/gcp/reboot.go:136
  Spec Goroutine
  goroutine 8090 [semacquire, 6 minutes]
    sync.runtime_Semacquire(0xc003966558?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7f4ef8098238?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f4ef8098238?, 0xc000e69f40}, {0x8147108?, 0xc00405c4e0}, {0xc003fe21a0, 0x182}, 0xc003936e10)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7f4ef8098238, 0xc000e69f40})
      test/e2e/cloud/gcp/reboot.go:141
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e69f40})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 8092 [chan receive, 2 minutes]
    k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7f4ef8098238?, 0xc000e69f40}, {0x8147108?, 0xc00405c4e0}, {0x76d190b, 0xb}, {0xc004fcb780, 0x4, 0x4}, 0x45d964b800, ...)
      test/e2e/framework/pod/resource.go:531
    k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...)
      test/e2e/framework/pod/resource.go:508
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f4ef8098238, 0xc000e69f40}, {0x8147108, 0xc00405c4e0}, {0x7ffd2e7ee5ee, 0x3}, {0xc003924780, 0x1f}, {0xc003fe21a0, 0x182})
      test/e2e/cloud/gcp/reboot.go:284
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
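Triage note: the progress report above captures the shape of the hang. The spec goroutine (testReboot, reboot.go:181) is parked in sync.WaitGroup.Wait while one per-node worker (reboot.go:173 via rebootNode to CheckPodsRunningReadyOrSucceeded) is still polling pods on bootstrap-e2e-minion-group-5kqh. A rough sketch of that fan-out pattern, with hypothetical names rather than the test's actual code: one worker per node, and the spec stays blocked until the slowest worker finishes.

package rebootsketch

import (
    "context"
    "sync"
)

// rebootAndVerify stands in for the per-node work seen in the stack:
// reboot the node, then wait for its pods to be running and ready again.
type rebootAndVerify func(ctx context.Context, node string) bool

// testRebootFanOut launches one goroutine per node and blocks in wg.Wait()
// until every worker returns (the frame the Spec Goroutine is parked in).
// A single slow node, here one whose pod never goes Ready, holds the whole
// spec until the worker's own pod timeout expires.
func testRebootFanOut(ctx context.Context, nodes []string, reboot rebootAndVerify) bool {
    var (
        wg    sync.WaitGroup
        mu    sync.Mutex
        allOK = true
    )
    for _, node := range nodes {
        wg.Add(1)
        go func(node string) {
            defer wg.Done()
            ok := reboot(ctx, node) // e.g. drop inbound packets, wait, recheck pods
            mu.Lock()
            if !ok {
                allOK = false
            }
            mu.Unlock()
        }(node)
    }
    wg.Wait()
    return allOK
}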
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv] Jan 28 23:26:13.492: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:19:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:19:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.39 PodIPs:[{IP:10.64.3.39}] StartTime:2023-01-28 22:54:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 5m0s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 23:18:16 +0000 UTC,FinishedAt:2023-01-28 23:19:18 +0000 UTC,ContainerID:containerd://868e2a0beaba251677d7fb52467c5526086099e08cb5aeb6814d885933c8508e,}} Ready:false RestartCount:11 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://868e2a0beaba251677d7fb52467c5526086099e08cb5aeb6814d885933c8508e Started:0xc00590abdf}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 28 23:26:13.551: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0128 23:24:30.547274 1 main.go:125] Version: v6.1.0 I0128 23:24:30.550276 1 main.go:168] Metrics path successfully registered at /metrics I0128 23:24:30.550599 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] I0128 23:25:33.455640 1 main.go:224] Metrics http server successfully started on :9102, /metrics I0128 23:25:33.456108 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456188 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456374 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456392 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456778 1 reflector.go:221] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134 I0128 23:25:33.456799 1 
reflector.go:257] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134 I0128 23:25:33.456977 1 reflector.go:221] Starting reflector *v1.VolumeSnapshot (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.457037 1 reflector.go:257] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.457283 1 snapshot_controller_base.go:152] Starting snapshot controller I0128 23:25:33.557432 1 shared_informer.go:285] caches populated I0128 23:25:33.557478 1 snapshot_controller_base.go:509] controller initialized Jan 28 23:26:13.551: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0128 23:24:30.547274 1 main.go:125] Version: v6.1.0 I0128 23:24:30.550276 1 main.go:168] Metrics path successfully registered at /metrics I0128 23:24:30.550599 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] I0128 23:25:33.455640 1 main.go:224] Metrics http server successfully started on :9102, /metrics I0128 23:25:33.456108 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456188 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456374 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456392 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456778 1 reflector.go:221] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134 I0128 23:25:33.456799 1 reflector.go:257] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134 I0128 23:25:33.456977 1 reflector.go:221] Starting reflector *v1.VolumeSnapshot (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.457037 1 reflector.go:257] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.457283 1 snapshot_controller_base.go:152] Starting snapshot controller I0128 23:25:33.557432 1 shared_informer.go:285] caches populated I0128 23:25:33.557478 1 snapshot_controller_base.go:509] controller initialized Jan 28 23:26:13.551: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-94k5n: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:01:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:02:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} 
{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-28 22:54:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 23:00:44 +0000 UTC,FinishedAt:2023-01-28 23:01:16 +0000 UTC,ContainerID:containerd://6610b36ea376572aa9045552b2a3a3cde3a29846696ca9838eb92776847eed45,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:5 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://6610b36ea376572aa9045552b2a3a3cde3a29846696ca9838eb92776847eed45 Started:0xc00590a1e7}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 23:26:13.595: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-94k5n/autoscaler: Jan 28 23:26:13.595: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-94k5n/autoscaler: Jan 28 23:26:13.596: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-5kqh: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:20:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:20:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.138.0.4 PodIPs:[{IP:10.138.0.4}] StartTime:2023-01-28 22:54:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-5kqh_kube-system(64d3f4571520730431db78be9372bf75),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 23:16:35 +0000 UTC,FinishedAt:2023-01-28 23:20:11 +0000 UTC,ContainerID:containerd://0384496db30ce5af9fa5a8a09c892b80b367379e8589f5d3aaef58846eeb9301,}} Ready:false RestartCount:7 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2 ImageID:sha256:ef97fd17575d534d8bc2960bbf1e744379f3ac6e86b9b97974e086f1516b75e5 ContainerID:containerd://0384496db30ce5af9fa5a8a09c892b80b367379e8589f5d3aaef58846eeb9301 Started:0xc00590a46f}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 23:26:13.654: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-5kqh/kube-proxy: Jan 28 23:26:13.654: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-5kqh/kube-proxy: Jan 28 23:26:13.654: INFO: Node bootstrap-e2e-minion-group-5kqh failed 
reboot test.
Jan 28 23:26:13.654: INFO: Executing termination hook on nodes
Jan 28 23:26:13.654: INFO: Getting external IP address for bootstrap-e2e-minion-group-5kqh
Jan 28 23:26:13.654: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-5kqh(34.168.200.47:22)
Jan 28 23:26:14.179: INFO: ssh prow@34.168.200.47:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 28 23:26:14.179: INFO: ssh prow@34.168.200.47:22: stdout: ""
Jan 28 23:26:14.179: INFO: ssh prow@34.168.200.47:22: stderr: "cat: /tmp/drop-inbound.log: No such file or directory\n"
Jan 28 23:26:14.179: INFO: ssh prow@34.168.200.47:22: exit code: 1
Jan 28 23:26:14.179: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-inbound.log: No such file or directory )
Jan 28 23:26:14.179: INFO: Getting external IP address for bootstrap-e2e-minion-group-v2xx
Jan 28 23:26:14.179: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-v2xx(34.145.43.141:22)
Jan 28 23:26:14.704: INFO: ssh prow@34.145.43.141:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 28 23:26:14.704: INFO: ssh prow@34.145.43.141:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 23:21:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 28 23:26:14.704: INFO: ssh prow@34.145.43.141:22: stderr: ""
Jan 28 23:26:14.704: INFO: ssh prow@34.145.43.141:22: exit code: 0
Jan 28 23:26:14.704: INFO: Getting external IP address for bootstrap-e2e-minion-group-z2p7
Jan 28 23:26:14.704: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-z2p7(34.168.4.157:22)
Jan 28 23:26:15.233: INFO: ssh prow@34.168.4.157:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 28 23:26:15.233: INFO: ssh prow@34.168.4.157:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 23:21:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 28 23:26:15.233: INFO: ssh prow@34.168.4.157:22: stderr: ""
Jan 28 23:26:15.233: INFO: ssh prow@34.168.4.157:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 23:26:15.233
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 23:26:15.233 (5m2.052s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 23:26:15.233
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 23:26:15.234
Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }.
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-gmtb4 to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.949886764s (3.949897316s including waiting) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.5:8181/ready": dial tcp 10.64.3.5:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.19:8181/ready": dial tcp 10.64.3.19:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
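The termination-hook readback just above, before the event dump began, shows the key asymmetry in this run: bootstrap-e2e-minion-group-v2xx and -z2p7 returned the full trace of the hook (insert an ACCEPT rule for 127.0.0.1, insert a DROP for all other inbound traffic, sleep 120 seconds, then delete both rules), while bootstrap-e2e-minion-group-5kqh had no /tmp/drop-inbound.log at all, and that is the node whose pods never came back to Ready. A minimal sketch of the same readback, assuming plain `ssh` access as the "prow" user and reusing this run's external IPs purely for illustration (this is not the e2e framework's SSH helper):

```go
// A minimal sketch of the termination-hook readback shown above; this is NOT
// the e2e framework's SSH helper. It assumes direct `ssh` access as the
// "prow" user and reuses the external IPs logged for this run purely for
// illustration.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	nodes := map[string]string{
		"bootstrap-e2e-minion-group-5kqh": "34.168.200.47",
		"bootstrap-e2e-minion-group-v2xx": "34.145.43.141",
		"bootstrap-e2e-minion-group-z2p7": "34.168.4.157",
	}

	for name, ip := range nodes {
		// Read the hook's trace without deleting it (the test itself also rm's the file).
		cmd := exec.Command("ssh", "-o", "StrictHostKeyChecking=no",
			"prow@"+ip, "cat /tmp/drop-inbound.log")
		out, err := cmd.CombinedOutput()

		exitCode := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			exitCode = exitErr.ExitCode() // e.g. 1 when the log file is missing
		} else if err != nil {
			exitCode = -1 // ssh itself could not be run
		}
		fmt.Printf("%s (%s): exit=%d\n%s\n", name, ip, exitCode, out)
	}
}
```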
Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.25:8181/ready": dial tcp 10.64.3.25:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-gmtb4_kube-system(48008db0-bd58-4d0b-9f0f-1a30f9ae1eed) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-m4glj to bootstrap-e2e-minion-group-v2xx Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 971.107113ms (971.12427ms including waiting) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.3:8181/ready": dial tcp 10.64.0.3:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Liveness probe failed: Get "http://10.64.0.8:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-m4glj_kube-system(48c280c5-14bc-438a-86fa-1f138734ffe4) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.9:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-m4glj Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.12:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Liveness probe failed: Get "http://10.64.0.12:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Container coredns failed liveness probe, will be restarted Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-gmtb4 Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-m4glj Jan 28 23:26:15.308: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 23:26:15.308: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
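Stepping back from the per-pod events: the goroutine dump earlier in the log shows where the spec was stuck at the 5-minute mark. The spec goroutine is parked in sync.(*WaitGroup).Wait inside testReboot, while worker goroutines (testReboot.func2 -> rebootNode -> CheckPodsRunningReadyOrSucceeded) do the actual waiting on the individual nodes. A stripped-down sketch of that fan-out pattern, with a placeholder rebootNode standing in for the real per-node logic:

```go
// A stripped-down sketch of the fan-out visible in the goroutine dump above:
// one worker per node, results collected into a shared slice, and the spec
// goroutine parked in WaitGroup.Wait until every worker returns. rebootNode
// here is only a placeholder for the real drop-traffic-and-recheck logic.
package main

import (
	"fmt"
	"sync"
)

// Placeholder for the real per-node helper, which drops inbound traffic,
// waits for the node and its pods to recover, and returns false on timeout.
func rebootNode(name string) bool {
	return name != "bootstrap-e2e-minion-group-5kqh" // mirrors this run's outcome
}

func main() {
	nodes := []string{
		"bootstrap-e2e-minion-group-5kqh",
		"bootstrap-e2e-minion-group-v2xx",
		"bootstrap-e2e-minion-group-z2p7",
	}

	result := make([]bool, len(nodes))
	var wg sync.WaitGroup
	for i := range nodes {
		wg.Add(1)
		go func(ix int) {
			defer wg.Done()
			result[ix] = rebootNode(nodes[ix])
		}(i)
	}
	wg.Wait() // where the spec goroutine in the dump is blocked

	failed := false
	for i, ok := range result {
		if !ok {
			failed = true
			fmt.Printf("Node %s failed reboot test.\n", nodes[i])
		}
	}
	if failed {
		fmt.Println("Test failed; at least one node failed to reboot in the time given.")
	}
}
```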
Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_4ce5d became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a401b became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_85a06 became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_11417 became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_66efa became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_f2767 became leader Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-btst9 to bootstrap-e2e-minion-group-v2xx Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 603.127236ms (603.144594ms including waiting) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and 
re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-btst9_kube-system(6650f946-87f1-464b-b8b7-08392ca3dbab) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-h2g89 to bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 666.826587ms (666.837294ms including waiting) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-h2g89_kube-system(f9bf502e-a58e-40db-b5b6-dfa14e5b7875) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-h2g89_kube-system(f9bf502e-a58e-40db-b5b6-dfa14e5b7875) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "http://10.64.1.19:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-jk72b to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image 
"registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.919998904s (2.92000884s including waiting) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-jk72b_kube-system(eacd1411-5c92-4ce8-bc32-8a79a0a0aac6) Jan 28 23:26:15.308: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-jk72b Jan 28 23:26:15.308: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-h2g89 Jan 28 23:26:15.308: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-btst9 Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 28 23:26:15.308: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 23:26:15.308: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.308: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 23:26:15.308: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7f3864a-79f1-4243-a016-abad9defaf85 became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1607b7ec-e6bf-44d1-a209-56dc258333fe became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_cc0c1448-463a-48d0-91ef-9220541eaa8a became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_fbb70045-05fa-4f1e-93de-99c62df7bfea became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f6be65e3-5140-4eda-a1cd-e7225bb4436d became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_174c2f28-14d6-48ac-bbc1-8a61bda0e4b7 became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_b77e8b6e-3335-4483-9b2d-499e7708a013 became leader Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-94k5n to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.596877064s (1.596888989s including waiting) Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-94k5n_kube-system(a31058f2-55a7-4b22-9fb1-c421767f594c) Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-94k5n_kube-system(a31058f2-55a7-4b22-9fb1-c421767f594c) Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
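kube-dns-autoscaler-5f6455f985-94k5n is the pod that exhausted the 5m0s budget: its phase stayed Running, but its Ready condition had been False (ContainersNotReady: autoscaler) since 23:01:31 and never returned within the window, which is exactly what the repeated "didn't have condition {Ready True}" entries earlier were reporting. A minimal approximation of that "running and ready, or succeeded" check (not the framework's CheckPodsRunningReadyOrSucceeded), assuming a default kubeconfig and using this run's pod name for illustration:

```go
// A minimal approximation (not the framework's CheckPodsRunningReadyOrSucceeded)
// of the "running and ready, or succeeded" condition that timed out above.
// Assumes a default kubeconfig; the pod name is the one from this run.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// runningReadyOrSucceeded mirrors the condition text in the log: either the
// pod already Succeeded, or it is Running with condition Ready=True.
func runningReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same budget as the log: up to 5m0s, re-checking every 2s.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
			"kube-dns-autoscaler-5f6455f985-94k5n", metav1.GetOptions{})
		if err == nil && runningReadyOrSucceeded(pod) {
			fmt.Println("pod is running and ready, or succeeded")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out: pod never became running and ready, or succeeded")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```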
Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-94k5n Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-5kqh_kube-system(64d3f4571520730431db78be9372bf75) Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-v2xx_kube-system(bb9deafc2cbae25454444f8cda5500ca) Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z2p7_kube-system(e9c46e782bd92592f44f3dd337e30259) Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z2p7_kube-system(e9c46e782bd92592f44f3dd337e30259) Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z2p7_kube-system(e9c46e782bd92592f44f3dd337e30259) Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d08dc38d-6be6-4c10-9977-2e55c0f9654d became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_cd913c2f-e98e-43bb-98bc-df89dce0f7ee became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b0c84497-1313-4390-b088-a16ae1e38e6c became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_78ae129e-0bc0-4959-bc28-a178c74018d1 became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_1c5f4221-88c4-4a3c-ab4b-7604fe80d908 became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d0306157-7329-4b46-9892-d9cf18347643 became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4b85754e-49bd-4e12-baec-261b6dcc046c became leader Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-wjzcg to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.356180827s (2.356196114s including waiting) Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-wjzcg Jan 28 23:26:15.309: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2mtlx to bootstrap-e2e-master Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 788.057866ms (788.066097ms including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.184224787s (2.184232084s including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5d8kv to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 754.439073ms (754.451345ms including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.98959934s (1.989628324s including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-cm88n to bootstrap-e2e-minion-group-v2xx Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 779.269471ms (779.280127ms including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.914880811s (1.914910128s including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-lw5t2 to bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.778377ms (733.800063ms including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container 
metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.935483668s (1.935498891s including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-cm88n Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-lw5t2 Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2mtlx Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5d8kv Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had 
untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-x75mm to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 4.05701627s (4.057042853s including waiting) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.333143952s (1.33319741s including waiting) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-x75mm Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-x75mm Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-v2r9c to bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.716154798s (1.716184984s including waiting) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} 
Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.026232391s (1.026241999s including waiting) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": dial tcp 10.64.1.7:10250: connect: connection refused Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-v2r9c_kube-system(b8856956-45a3-4c9e-a3fd-2359271a8fba) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-v2r9c_kube-system(b8856956-45a3-4c9e-a3fd-2359271a8fba) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.10:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-v2r9c Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.18:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.18:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-v2r9c_kube-system(b8856956-45a3-4c9e-a3fd-2359271a8fba) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.20:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-v2r9c Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 23:26:15.309: INFO: event for 
metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.150769479s (2.150800929s including waiting) Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 23:26:15.309 (76ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 23:26:15.309 Jan 28 23:26:15.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 23:26:15.354 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 23:26:15.354 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 23:26:15.355 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 23:26:15.355 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 
23:26:15.355 STEP: Collecting events from namespace "reboot-6844". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 23:26:15.355 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/28/23 23:26:15.399 Jan 28 23:26:15.440: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 23:26:15.440: INFO: Jan 28 23:26:15.483: INFO: Logging node info for node bootstrap-e2e-master Jan 28 23:26:15.527: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 4cdfb6e7-727d-421b-a4d7-efbd5562b935 3902 0 2023-01-28 22:54:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 22:54:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 22:54:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 23:25:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 
DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:25:48 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:25:48 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:25:48 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:25:48 +0000 UTC,LastTransitionTime:2023-01-28 22:54:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.136.180,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d617d4ca8a44383986c203bfbf0066d1,SystemUUID:d617d4ca-8a44-3839-86c2-03bfbf0066d1,BootID:8dbe1fa7-5a18-43b3-9fa4-081b8c329dab,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:26:15.527: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 23:26:15.575: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 23:26:15.640: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 28 23:26:15.640: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container kube-controller-manager ready: true, restart count 7 Jan 28 23:26:15.640: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 22:53:55 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container l7-lb-controller ready: true, restart count 8 Jan 28 23:26:15.640: INFO: metadata-proxy-v0.1-2mtlx started at 2023-01-28 22:54:23 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:15.640: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 23:26:15.640: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 23:26:15.640: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container etcd-container ready: true, restart count 3 Jan 28 23:26:15.640: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container etcd-container ready: true, restart count 6 Jan 28 23:26:15.640: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container kube-apiserver ready: true, restart count 1 Jan 28 23:26:15.640: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container kube-scheduler ready: false, restart count 6 Jan 28 23:26:15.640: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 22:53:55 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 28 23:26:15.815: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 23:26:15.815: INFO: Logging node info for node bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.869: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-5kqh 22141237-0160-4034-9e28-ae02d88cb4ba 3803 0 2023-01-28 22:54:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-5kqh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:01:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:02:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 23:22:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 23:23:09 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-5kqh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:23:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.200.47,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-5kqh.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-5kqh.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9b0656f82e87ede044202cdcb6f45e0d,SystemUUID:9b0656f8-2e87-ede0-4420-2cdcb6f45e0d,BootID:54027c00-e043-4e49-be7c-dd07f5d46486,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:26:15.869: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.926: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-5kqh Jan 28 23:26:16.032: INFO: kube-proxy-bootstrap-e2e-minion-group-5kqh started at 2023-01-28 22:54:24 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container kube-proxy ready: false, restart 
count 8 Jan 28 23:26:16.032: INFO: l7-default-backend-8549d69d99-wjzcg started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container default-http-backend ready: true, restart count 2 Jan 28 23:26:16.032: INFO: kube-dns-autoscaler-5f6455f985-94k5n started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container autoscaler ready: false, restart count 5 Jan 28 23:26:16.032: INFO: volume-snapshot-controller-0 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container volume-snapshot-controller ready: false, restart count 12 Jan 28 23:26:16.032: INFO: coredns-6846b5b5f-gmtb4 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container coredns ready: true, restart count 9 Jan 28 23:26:16.032: INFO: metadata-proxy-v0.1-5d8kv started at 2023-01-28 22:54:25 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:16.032: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 23:26:16.032: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 23:26:16.032: INFO: konnectivity-agent-jk72b started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 23:26:16.211: INFO: Latency metrics for node bootstrap-e2e-minion-group-5kqh Jan 28 23:26:16.211: INFO: Logging node info for node bootstrap-e2e-minion-group-v2xx Jan 28 23:26:16.254: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-v2xx 8d3b42ad-eaa3-4569-8913-6869a3343290 3852 0 2023-01-28 22:54:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-v2xx kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:21:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:23:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 23:23:24 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 23:23:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-v2xx,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:24 +0000 UTC,LastTransitionTime:2023-01-28 23:23:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:24 +0000 UTC,LastTransitionTime:2023-01-28 23:23:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:24 +0000 UTC,LastTransitionTime:2023-01-28 23:23:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:23:24 +0000 UTC,LastTransitionTime:2023-01-28 23:23:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.43.141,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-v2xx.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-v2xx.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ee1a644c4c428a8b9c148cb020481c61,SystemUUID:ee1a644c-4c42-8a8b-9c14-8cb020481c61,BootID:c9e9ca8f-52bf-46e3-b53e-f577aaa102b2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:26:16.254: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-v2xx Jan 28 23:26:16.311: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-v2xx Jan 28 23:26:16.405: INFO: konnectivity-agent-btst9 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.405: INFO: Container konnectivity-agent ready: false, restart count 2 Jan 28 23:26:16.405: INFO: coredns-6846b5b5f-m4glj started at 2023-01-28 22:54:45 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.405: INFO: Container coredns ready: true, restart count 10 Jan 28 23:26:16.405: INFO: kube-proxy-bootstrap-e2e-minion-group-v2xx started at 2023-01-28 22:54:21 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.405: INFO: Container kube-proxy ready: true, restart count 5 Jan 28 23:26:16.405: INFO: metadata-proxy-v0.1-cm88n started at 2023-01-28 22:54:22 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:16.405: INFO: Container metadata-proxy ready: true, restart count 3 Jan 28 23:26:16.405: INFO: Container prometheus-to-sd-exporter ready: true, restart count 3 Jan 28 23:26:16.566: INFO: Latency metrics for node bootstrap-e2e-minion-group-v2xx Jan 28 23:26:16.566: INFO: Logging node info for node bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:16.608: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z2p7 bfe9b4b0-043a-4f06-a0c0-cc180155d59d 3851 0 2023-01-28 22:54:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z2p7 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:22:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:23:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 23:23:25 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 23:23:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-z2p7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 
UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:25 +0000 UTC,LastTransitionTime:2023-01-28 23:23:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:25 +0000 UTC,LastTransitionTime:2023-01-28 23:23:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:25 +0000 UTC,LastTransitionTime:2023-01-28 23:23:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:23:25 +0000 UTC,LastTransitionTime:2023-01-28 23:23:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.4.157,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z2p7.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z2p7.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:acd066cc5d3d0c26751e787888eec6d0,SystemUUID:acd066cc-5d3d-0c26-751e-787888eec6d0,BootID:33043795-fc7a-4d49-8c43-c5f2544df172,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:26:16.608: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:16.661: 
INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:16.723: INFO: kube-proxy-bootstrap-e2e-minion-group-z2p7 started at 2023-01-28 22:54:22 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.723: INFO: Container kube-proxy ready: true, restart count 8 Jan 28 23:26:16.723: INFO: metadata-proxy-v0.1-lw5t2 started at 2023-01-28 22:54:23 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:16.723: INFO: Container metadata-proxy ready: true, restart count 3 Jan 28 23:26:16.723: INFO: Container prometheus-to-sd-exporter ready: true, restart count 3 Jan 28 23:26:16.723: INFO: konnectivity-agent-h2g89 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.723: INFO: Container konnectivity-agent ready: true, restart count 10 Jan 28 23:26:16.723: INFO: metrics-server-v0.5.2-867b8754b9-v2r9c started at 2023-01-28 22:54:59 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:16.723: INFO: Container metrics-server ready: false, restart count 13 Jan 28 23:26:16.723: INFO: Container metrics-server-nanny ready: false, restart count 13 Jan 28 23:26:16.897: INFO: Latency metrics for node bootstrap-e2e-minion-group-z2p7 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 23:26:16.897 (1.542s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 23:26:16.897 (1.543s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 23:26:16.897 STEP: Destroying namespace "reboot-6844" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 23:26:16.897 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 23:26:16.941 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 23:26:16.942 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 23:26:16.942 (0s)
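The node dumps above record each node's Ready condition and kube-system pod restart counts at the moment the reboot check gave up. For readers reproducing the diagnosis outside the e2e framework, a similar readiness probe can be approximated with a small client-go loop; the sketch below is an illustration only, under the assumption of cluster access via this run's kubeconfig. The node name, poll interval, and timeout are placeholders taken from this run, not the values hard-coded in test/e2e/cloud/gcp/reboot.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True, the same
// condition dumped above for each bootstrap-e2e node.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes the same kubeconfig the suite used; path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	const name = "bootstrap-e2e-minion-group-5kqh" // placeholder node from this run
	// Poll until the node reports Ready, or give up after two minutes.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		node, getErr := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat transient API errors as "keep polling"
		}
		return nodeReady(node), nil
	})
	fmt.Println("node ready:", err == nil)
}

If the condition never flips within the window, the loop times out in the same way the suite did here, which is the symptom the dump above is diagnosing.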
Elapsed: 12.089183006s Jan 28 23:21:25.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:25.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.090398063s Jan 28 23:21:25.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:25.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 12.089698854s Jan 28 23:21:25.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:26.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:26.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:27.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.089591259s Jan 28 23:21:27.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:27.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.090744678s Jan 28 23:21:27.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:27.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 14.089994759s Jan 28 23:21:27.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:28.325: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:28.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:29.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.086450896s Jan 28 23:21:29.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:29.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 16.090349393s Jan 28 23:21:29.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:29.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 16.089585307s Jan 28 23:21:29.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:30.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:30.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:31.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.087922796s Jan 28 23:21:31.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:31.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 18.089613366s Jan 28 23:21:31.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:31.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 18.089868775s Jan 28 23:21:31.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:32.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:32.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:33.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.086777933s Jan 28 23:21:33.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:33.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 20.089285584s Jan 28 23:21:33.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:33.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 20.089675449s Jan 28 23:21:33.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:34.454: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:34.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:35.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.086308039s Jan 28 23:21:35.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:35.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 22.089395923s Jan 28 23:21:35.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:35.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 22.090362952s Jan 28 23:21:35.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:36.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:36.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:37.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.08673948s Jan 28 23:21:37.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:37.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 24.09001795s Jan 28 23:21:37.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:37.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 24.089306747s Jan 28 23:21:37.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:38.556: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:38.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:39.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.088957137s Jan 28 23:21:39.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:39.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.090072225s Jan 28 23:21:39.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:39.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 26.089328486s Jan 28 23:21:39.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:40.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:40.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:41.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.086661836s Jan 28 23:21:41.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:41.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 28.090198285s Jan 28 23:21:41.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:41.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 28.089482797s Jan 28 23:21:41.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:42.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:42.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:43.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.087129625s Jan 28 23:21:43.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:43.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 30.08904718s Jan 28 23:21:43.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 30.089907496s Jan 28 23:21:43.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:43.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:44.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:44.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:45.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.086349379s Jan 28 23:21:45.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:45.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 32.089653392s Jan 28 23:21:45.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 32.090502779s Jan 28 23:21:45.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:45.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:46.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:46.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:47.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 34.089373824s Jan 28 23:21:47.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.089360717s Jan 28 23:21:47.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:47.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:47.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 34.08999164s Jan 28 23:21:47.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:48.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:48.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:49.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.086818615s Jan 28 23:21:49.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:49.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.088729221s Jan 28 23:21:49.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 36.089561928s Jan 28 23:21:49.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:49.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:50.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:50.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:51.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.086352546s Jan 28 23:21:51.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:51.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 38.090261787s Jan 28 23:21:51.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.08942935s Jan 28 23:21:51.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:51.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:52.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:52.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:53.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 40.089803768s Jan 28 23:21:53.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:53.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 40.090240295s Jan 28 23:21:53.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:53.453: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.091134889s Jan 28 23:21:53.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:54.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:54.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:55.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.086024713s Jan 28 23:21:55.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:55.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 42.088723333s Jan 28 23:21:55.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.08955862s Jan 28 23:21:55.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:55.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:56.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:56.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:57.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.08615709s Jan 28 23:21:57.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:57.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 44.089006243s Jan 28 23:21:57.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:57.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.089179134s Jan 28 23:21:57.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:59.018: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-v2xx condition Ready to be true Jan 28 23:21:59.018: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:21:59.060: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:21:59.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.08598007s Jan 28 23:21:59.447: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:21:59.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 46.089385375s Jan 28 23:21:59.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.088518561s Jan 28 23:21:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:21:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:01.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:22:01.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:01.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.086620614s Jan 28 23:22:01.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:01.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 48.09157134s Jan 28 23:22:01.453: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.090766618s Jan 28 23:22:01.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:01.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:03.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:22:03.147: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:03.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.087120679s Jan 28 23:22:03.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:03.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 50.089763161s Jan 28 23:22:03.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:03.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.089962586s Jan 28 23:22:03.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:05.148: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-z2p7 condition Ready to be true Jan 28 23:22:05.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:05.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:05.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.087865496s Jan 28 23:22:05.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:05.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 52.089431952s Jan 28 23:22:05.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:05.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.089868516s Jan 28 23:22:05.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:07.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:07.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:07.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.086344418s Jan 28 23:22:07.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:07.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 54.089091794s Jan 28 23:22:07.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:07.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.089382116s Jan 28 23:22:07.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:09.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:09.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:09.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.08605218s Jan 28 23:22:09.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:09.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 56.088839163s Jan 28 23:22:09.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:09.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.088967445s Jan 28 23:22:09.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:11.320: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:11.320: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:11.450: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.088112295s Jan 28 23:22:11.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:11.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 58.089611622s Jan 28 23:22:11.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.090460128s Jan 28 23:22:11.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:11.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:13.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 23:22:13.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:13.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.086627503s Jan 28 23:22:13.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:13.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.089323875s Jan 28 23:22:13.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:13.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.089774044s Jan 28 23:22:13.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:15.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:15.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:15.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.086979732s Jan 28 23:22:15.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:15.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.089089425s Jan 28 23:22:15.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.089923543s Jan 28 23:22:15.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:15.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:17.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.090060231s Jan 28 23:22:17.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:17.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.089368094s Jan 28 23:22:17.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:17.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.090342493s Jan 28 23:22:17.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:17.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:17.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:19.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.089276915s Jan 28 23:22:19.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.090090727s Jan 28 23:22:19.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:19.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:19.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m6.091161213s Jan 28 23:22:19.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:19.500: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:19.500: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:21.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.086275729s Jan 28 23:22:21.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:21.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.090380986s Jan 28 23:22:21.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:21.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.089624808s Jan 28 23:22:21.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:21.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:21.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:23.462: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.100337802s Jan 28 23:22:23.462: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:23.463: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.101243476s Jan 28 23:22:23.463: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:23.464: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m10.101693068s Jan 28 23:22:23.464: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:23.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:23.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:25.462: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.100571145s Jan 28 23:22:25.462: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:25.463: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.102034687s Jan 28 23:22:25.463: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.101207251s Jan 28 23:22:25.463: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:25.463: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:25.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:25.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:27.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.086798982s Jan 28 23:22:27.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:27.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.088289668s Jan 28 23:22:27.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:27.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.088772084s Jan 28 23:22:27.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:27.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:27.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:29.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.086575277s Jan 28 23:22:29.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:29.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.09040024s Jan 28 23:22:29.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.089567489s Jan 28 23:22:29.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:29.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:31.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.086452139s Jan 28 23:22:31.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:31.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m18.090090436s Jan 28 23:22:31.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:31.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.089315512s Jan 28 23:22:31.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:31.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:31.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:33.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.089602844s Jan 28 23:22:33.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:33.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.091868007s Jan 28 23:22:33.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:33.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.092084885s Jan 28 23:22:33.454: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:33.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:33.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:35.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.086986928s Jan 28 23:22:35.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:35.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.088850984s Jan 28 23:22:35.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m22.089700433s Jan 28 23:22:35.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:35.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:35.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:35.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:37.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.089954435s Jan 28 23:22:37.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:37.453: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.090356074s Jan 28 23:22:37.453: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.091177124s Jan 28 23:22:37.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:37.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:37.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:37.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:39.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.086839194s Jan 28 23:22:39.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:39.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.090554053s Jan 28 23:22:39.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m26.089749358s Jan 28 23:22:39.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:39.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:39.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:39.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:41.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.086998321s Jan 28 23:22:41.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:41.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.088979652s Jan 28 23:22:41.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m28.08982781s Jan 28 23:22:41.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:41.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:41.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:41.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:43.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.089086462s Jan 28 23:22:43.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:43.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m30.091396627s Jan 28 23:22:43.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:43.453: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.09072162s Jan 28 23:22:43.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:44.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:44.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:45.474: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.112363086s Jan 28 23:22:45.474: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:45.474: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.112549742s Jan 28 23:22:45.474: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.111755935s Jan 28 23:22:45.474: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:45.474: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:46.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:46.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:47.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.089157992s Jan 28 23:22:47.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:47.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m34.089630928s Jan 28 23:22:47.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:47.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.090470803s Jan 28 23:22:47.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:48.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:48.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:49.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.086340457s Jan 28 23:22:49.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:49.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m36.089049523s Jan 28 23:22:49.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:49.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.089509142s Jan 28 23:22:49.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:50.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:50.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:51.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.086968391s Jan 28 23:22:51.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:51.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m38.089496243s Jan 28 23:22:51.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:51.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.088734125s Jan 28 23:22:51.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:52.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:52.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:53.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.085884085s Jan 28 23:22:53.447: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:53.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.089621544s Jan 28 23:22:53.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m40.088803776s Jan 28 23:22:53.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:53.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:54.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:54.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:55.450: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.088359413s Jan 28 23:22:55.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:55.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m42.090364044s Jan 28 23:22:55.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:55.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.089682502s Jan 28 23:22:55.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:56.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:56.328: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:57.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.0865798s Jan 28 23:22:57.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:57.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m44.09047317s Jan 28 23:22:57.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:57.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.089708558s Jan 28 23:22:57.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:22:58.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:22:58.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:22:59.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.085847265s Jan 28 23:22:59.447: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:59.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m46.089660039s Jan 28 23:22:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:22:59.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.088927627s Jan 28 23:22:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:23:00.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:00.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:01.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.086428556s Jan 28 23:23:01.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:01.451: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m48.08922223s Jan 28 23:23:01.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-5kqh' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:20:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:24 +0000 UTC }] Jan 28 23:23:01.452: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.090104058s Jan 28 23:23:01.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:02.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:02.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:03.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.087982071s Jan 28 23:23:03.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:03.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m50.089525032s Jan 28 23:23:03.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:03.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=true. Elapsed: 1m50.09000226s Jan 28 23:23:03.452: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh" satisfied condition "running and ready, or succeeded" Jan 28 23:23:04.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:04.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:05.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.087072083s Jan 28 23:23:05.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:05.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.088764804s Jan 28 23:23:05.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:06.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. 
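The entries above keep reporting that pods on bootstrap-e2e-minion-group-5kqh "didn't have condition {Ready True}": the gate being polled is "running and ready, or succeeded", i.e. a pod passes once it has Succeeded, or is Running with its Ready condition True. A minimal sketch of such a check, written only against k8s.io/api/core/v1 types; podRunningReadyOrSucceeded and the sample pod in main are illustrative, not the framework's own helpers.

```go
// Illustrative sketch (not the e2e framework's code) of a
// "running and ready, or succeeded" gate like the one polled above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podRunningReadyOrSucceeded returns true when the pod has Succeeded, or is
// Running with its Ready condition True; otherwise it returns a reason in
// the spirit of the "didn't have condition {Ready True}" log lines.
func podRunningReadyOrSucceeded(pod *corev1.Pod) (bool, string) {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true, ""
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false, fmt.Sprintf("pod %q is in phase %q, not Running", pod.Name, pod.Status.Phase)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			if cond.Status == corev1.ConditionTrue {
				return true, ""
			}
			return false, fmt.Sprintf("pod %q didn't have condition {Ready True}: %s (%s)",
				pod.Name, cond.Reason, cond.Message)
		}
	}
	return false, fmt.Sprintf("pod %q has no Ready condition yet", pod.Name)
}

func main() {
	// Hypothetical pod object standing in for kube-dns-autoscaler above.
	pod := &corev1.Pod{}
	pod.Name = "kube-dns-autoscaler-5f6455f985-94k5n"
	pod.Status.Phase = corev1.PodRunning
	pod.Status.Conditions = []corev1.PodCondition{{
		Type:    corev1.PodReady,
		Status:  corev1.ConditionFalse,
		Reason:  "ContainersNotReady",
		Message: "containers with unready status: [autoscaler]",
	}}
	fmt.Println(podRunningReadyOrSucceeded(pod))
}
```

At 23:23:03 kube-proxy-bootstrap-e2e-minion-group-5kqh flips to readiness=true and immediately satisfies the condition, which corresponds to the Ready==True branch in the sketch.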
Failure Jan 28 23:23:06.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:07.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.087892975s Jan 28 23:23:07.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:07.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.088995073s Jan 28 23:23:07.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:08.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:08.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:09.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.086132896s Jan 28 23:23:09.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:09.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m56.087626218s Jan 28 23:23:09.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:10.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:10.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:11.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.08627741s Jan 28 23:23:11.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:11.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.088154915s Jan 28 23:23:11.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:12.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:12.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. 
Failure Jan 28 23:23:13.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.089412341s Jan 28 23:23:13.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:13.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.090577186s Jan 28 23:23:13.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:14.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:14.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:15.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.086410107s Jan 28 23:23:15.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:15.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m2.088146766s Jan 28 23:23:15.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:16.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:16.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:17.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.086532367s Jan 28 23:23:17.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:17.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.089751446s Jan 28 23:23:17.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:18.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:18.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. 
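The interleaved node lines show bootstrap-e2e-minion-group-v2xx and bootstrap-e2e-minion-group-z2p7 still NotReady and carrying node.kubernetes.io/unreachable taints applied by the node controller while they reboot; later entries show that "Ready but still tainted" is also logged as a Failure, so a node only counts as recovered once it is Ready and untainted. A rough sketch of a taint-aware readiness check of that shape, again using only k8s.io/api/core/v1 types (nodeReadyAndReachable is an illustrative name, not the framework's):

```go
// Illustrative sketch (not the e2e framework's code) of the check behind
// "Condition Ready of node ... is false, but Node is tainted by NodeController".
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReadyAndReachable returns true only when the node's Ready condition is
// True and it carries no node.kubernetes.io/unreachable taint.
func nodeReadyAndReachable(node *corev1.Node) (bool, string) {
	ready := false
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	var unreachable []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" {
			unreachable = append(unreachable, t)
		}
	}
	if ready && len(unreachable) == 0 {
		return true, ""
	}
	if len(unreachable) > 0 {
		return false, fmt.Sprintf("Condition Ready of node %s is %v, but Node is tainted by NodeController with %v",
			node.Name, ready, unreachable)
	}
	return false, fmt.Sprintf("Condition Ready of node %s is false", node.Name)
}

func main() {
	// Hypothetical node object mirroring bootstrap-e2e-minion-group-v2xx above.
	node := &corev1.Node{}
	node.Name = "bootstrap-e2e-minion-group-v2xx"
	node.Status.Conditions = []corev1.NodeCondition{{Type: corev1.NodeReady, Status: corev1.ConditionFalse}}
	node.Spec.Taints = []corev1.Taint{{Key: "node.kubernetes.io/unreachable", Effect: corev1.TaintEffectNoExecute}}
	fmt.Println(nodeReadyAndReachable(node))
}
```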
Failure Jan 28 23:23:19.450: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.088488222s Jan 28 23:23:19.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:19.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.088623695s Jan 28 23:23:19.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:20.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:20.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:21.449: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.087267986s Jan 28 23:23:21.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:21.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m8.088252243s Jan 28 23:23:21.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:22.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:21:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:22.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:23.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.089257968s Jan 28 23:23:23.451: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.089240217s Jan 28 23:23:23.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:23.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:24.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:24.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 23:22:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:25.507: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.145325846s Jan 28 23:23:25.507: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:25.507: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.145480523s Jan 28 23:23:25.507: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:27.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:03 +0000 UTC}]. Failure Jan 28 23:23:27.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 23:22:13 +0000 UTC}]. Failure Jan 28 23:23:27.456: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.094838094s Jan 28 23:23:27.456: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:27.456: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m14.095001814s Jan 28 23:23:27.456: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-lw5t2 kube-proxy-bootstrap-e2e-minion-group-z2p7] Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-cm88n" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:23:29.054: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-lw5t2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Pod "metadata-proxy-v0.1-cm88n": Phase="Running", Reason="", readiness=true. Elapsed: 46.532709ms Jan 28 23:23:29.101: INFO: Pod "metadata-proxy-v0.1-cm88n" satisfied condition "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Pod "metadata-proxy-v0.1-lw5t2": Phase="Running", Reason="", readiness=true. Elapsed: 46.600364ms Jan 28 23:23:29.101: INFO: Pod "metadata-proxy-v0.1-lw5t2" satisfied condition "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7": Phase="Running", Reason="", readiness=true. Elapsed: 46.868484ms Jan 28 23:23:29.101: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" satisfied condition "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-lw5t2 kube-proxy-bootstrap-e2e-minion-group-z2p7] Jan 28 23:23:29.101: INFO: Reboot successful on node bootstrap-e2e-minion-group-z2p7 Jan 28 23:23:29.101: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx": Phase="Running", Reason="", readiness=true. Elapsed: 46.916985ms Jan 28 23:23:29.101: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" satisfied condition "running and ready, or succeeded" Jan 28 23:23:29.101: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:23:29.101: INFO: Reboot successful on node bootstrap-e2e-minion-group-v2xx Jan 28 23:23:29.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.08682045s Jan 28 23:23:29.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:29.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.088365469s Jan 28 23:23:29.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:31.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.085808088s Jan 28 23:23:31.447: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:31.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.087813639s Jan 28 23:23:31.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:33.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m20.086276815s Jan 28 23:23:33.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:33.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.088060214s Jan 28 23:23:33.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:35.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.086257004s Jan 28 23:23:35.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:35.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.087839107s Jan 28 23:23:35.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:37.452: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m24.090397607s Jan 28 23:23:37.452: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:37.453: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.091754497s Jan 28 23:23:37.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:39.447: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.08604726s Jan 28 23:23:39.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:39.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.087616709s Jan 28 23:23:39.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:41.448: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m28.086253338s Jan 28 23:23:41.448: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:19:19 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:23:41.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.088141774s Jan 28 23:23:41.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:24:43.449: INFO: Retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-94k5n, retrying after 0s: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-dns-autoscaler-5f6455f985-94k5n) Jan 28 23:24:43.452: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": stream error: stream ID 2305; INTERNAL_ERROR; received from peer Jan 28 23:24:43.452: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 28 23:25:32.937: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m19.576030132s Jan 28 23:25:32.938: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:33.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m20.088873796s Jan 28 23:25:33.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:35.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.087850987s Jan 28 23:25:35.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:37.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m24.087990658s Jan 28 23:25:37.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:39.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.088143192s Jan 28 23:25:39.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:41.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m28.087830369s Jan 28 23:25:41.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:43.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.089973866s Jan 28 23:25:43.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:45.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m32.08857565s Jan 28 23:25:45.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:47.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.088048058s Jan 28 23:25:47.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:49.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m36.088087553s Jan 28 23:25:49.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:51.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.087995979s Jan 28 23:25:51.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:53.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.088019999s Jan 28 23:25:53.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:55.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.088316884s Jan 28 23:25:55.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:57.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m44.088536466s Jan 28 23:25:57.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:25:59.451: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.089193218s Jan 28 23:25:59.451: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:01.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m48.088172396s Jan 28 23:26:01.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:03.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.088203259s Jan 28 23:26:03.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:05.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m52.088316049s Jan 28 23:26:05.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:07.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.087260597s Jan 28 23:26:07.449: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:09.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.088797038s Jan 28 23:26:09.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:11.449: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.08804626s Jan 28 23:26:11.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 5m0.29s) test/e2e/cloud/gcp/reboot.go:136 In [It] (Node Runtime: 5m0.001s) test/e2e/cloud/gcp/reboot.go:136 Spec Goroutine goroutine 8090 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc003966558?) 
/usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f4ef8098238?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f4ef8098238?, 0xc000e69f40}, {0x8147108?, 0xc00405c4e0}, {0xc003fe21a0, 0x182}, 0xc003936e10) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7f4ef8098238, 0xc000e69f40}) test/e2e/cloud/gcp/reboot.go:141 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc000e69f40}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 8092 [chan receive, 2 minutes] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7f4ef8098238?, 0xc000e69f40}, {0x8147108?, 0xc00405c4e0}, {0x76d190b, 0xb}, {0xc004fcb780, 0x4, 0x4}, 0x45d964b800, ...) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...) test/e2e/framework/pod/resource.go:508 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f4ef8098238, 0xc000e69f40}, {0x8147108, 0xc00405c4e0}, {0x7ffd2e7ee5ee, 0x3}, {0xc003924780, 0x1f}, {0xc003fe21a0, 0x182}) test/e2e/cloud/gcp/reboot.go:284 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 23:26:13.450: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.088751456s Jan 28 23:26:13.450: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:13.492: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.130172191s Jan 28 23:26:13.492: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:26:13.492: INFO: Pod kube-dns-autoscaler-5f6455f985-94k5n failed to be running and ready, or succeeded. Jan 28 23:26:13.492: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv] Jan 28 23:26:13.492: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:19:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:19:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.39 PodIPs:[{IP:10.64.3.39}] StartTime:2023-01-28 22:54:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 5m0s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 23:18:16 +0000 UTC,FinishedAt:2023-01-28 23:19:18 +0000 UTC,ContainerID:containerd://868e2a0beaba251677d7fb52467c5526086099e08cb5aeb6814d885933c8508e,}} Ready:false RestartCount:11 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://868e2a0beaba251677d7fb52467c5526086099e08cb5aeb6814d885933c8508e Started:0xc00590abdf}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 28 23:26:13.551: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0128 23:24:30.547274 1 main.go:125] Version: v6.1.0 I0128 23:24:30.550276 1 main.go:168] Metrics path successfully registered at /metrics I0128 23:24:30.550599 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] I0128 23:25:33.455640 1 main.go:224] Metrics http server successfully started on :9102, /metrics I0128 23:25:33.456108 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456188 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456374 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456392 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456778 1 reflector.go:221] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134 I0128 23:25:33.456799 1 
reflector.go:257] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134 I0128 23:25:33.456977 1 reflector.go:221] Starting reflector *v1.VolumeSnapshot (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.457037 1 reflector.go:257] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.457283 1 snapshot_controller_base.go:152] Starting snapshot controller I0128 23:25:33.557432 1 shared_informer.go:285] caches populated I0128 23:25:33.557478 1 snapshot_controller_base.go:509] controller initialized Jan 28 23:26:13.551: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0128 23:24:30.547274 1 main.go:125] Version: v6.1.0 I0128 23:24:30.550276 1 main.go:168] Metrics path successfully registered at /metrics I0128 23:24:30.550599 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] I0128 23:25:33.455640 1 main.go:224] Metrics http server successfully started on :9102, /metrics I0128 23:25:33.456108 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456188 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456374 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456392 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.456778 1 reflector.go:221] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134 I0128 23:25:33.456799 1 reflector.go:257] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134 I0128 23:25:33.456977 1 reflector.go:221] Starting reflector *v1.VolumeSnapshot (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.457037 1 reflector.go:257] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 23:25:33.457283 1 snapshot_controller_base.go:152] Starting snapshot controller I0128 23:25:33.557432 1 shared_informer.go:285] caches populated I0128 23:25:33.557478 1 snapshot_controller_base.go:509] controller initialized Jan 28 23:26:13.551: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-94k5n: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:01:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:02:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} 
{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-28 22:54:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 23:00:44 +0000 UTC,FinishedAt:2023-01-28 23:01:16 +0000 UTC,ContainerID:containerd://6610b36ea376572aa9045552b2a3a3cde3a29846696ca9838eb92776847eed45,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:5 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://6610b36ea376572aa9045552b2a3a3cde3a29846696ca9838eb92776847eed45 Started:0xc00590a1e7}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 23:26:13.595: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-94k5n/autoscaler: Jan 28 23:26:13.595: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-94k5n/autoscaler: Jan 28 23:26:13.596: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-5kqh: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:20:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:20:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.138.0.4 PodIPs:[{IP:10.138.0.4}] StartTime:2023-01-28 22:54:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-5kqh_kube-system(64d3f4571520730431db78be9372bf75),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 23:16:35 +0000 UTC,FinishedAt:2023-01-28 23:20:11 +0000 UTC,ContainerID:containerd://0384496db30ce5af9fa5a8a09c892b80b367379e8589f5d3aaef58846eeb9301,}} Ready:false RestartCount:7 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2 ImageID:sha256:ef97fd17575d534d8bc2960bbf1e744379f3ac6e86b9b97974e086f1516b75e5 ContainerID:containerd://0384496db30ce5af9fa5a8a09c892b80b367379e8589f5d3aaef58846eeb9301 Started:0xc00590a46f}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 23:26:13.654: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-5kqh/kube-proxy: Jan 28 23:26:13.654: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-5kqh/kube-proxy: Jan 28 23:26:13.654: INFO: Node bootstrap-e2e-minion-group-5kqh failed 
reboot test. Jan 28 23:26:13.654: INFO: Executing termination hook on nodes Jan 28 23:26:13.654: INFO: Getting external IP address for bootstrap-e2e-minion-group-5kqh Jan 28 23:26:13.654: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-5kqh(34.168.200.47:22) Jan 28 23:26:14.179: INFO: ssh prow@34.168.200.47:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 23:26:14.179: INFO: ssh prow@34.168.200.47:22: stdout: "" Jan 28 23:26:14.179: INFO: ssh prow@34.168.200.47:22: stderr: "cat: /tmp/drop-inbound.log: No such file or directory\n" Jan 28 23:26:14.179: INFO: ssh prow@34.168.200.47:22: exit code: 1 Jan 28 23:26:14.179: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-inbound.log: No such file or directory ) Jan 28 23:26:14.179: INFO: Getting external IP address for bootstrap-e2e-minion-group-v2xx Jan 28 23:26:14.179: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-v2xx(34.145.43.141:22) Jan 28 23:26:14.704: INFO: ssh prow@34.145.43.141:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 23:26:14.704: INFO: ssh prow@34.145.43.141:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 23:21:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 23:26:14.704: INFO: ssh prow@34.145.43.141:22: stderr: "" Jan 28 23:26:14.704: INFO: ssh prow@34.145.43.141:22: exit code: 0 Jan 28 23:26:14.704: INFO: Getting external IP address for bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:14.704: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-z2p7(34.168.4.157:22) Jan 28 23:26:15.233: INFO: ssh prow@34.168.4.157:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 23:26:15.233: INFO: ssh prow@34.168.4.157:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 23:21:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 23:26:15.233: INFO: ssh prow@34.168.4.157:22: stderr: "" Jan 28 23:26:15.233: INFO: ssh prow@34.168.4.157:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 23:26:15.233 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 23:26:15.233 (5m2.052s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 23:26:15.233 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 23:26:15.234 Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. 
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-gmtb4 to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.949886764s (3.949897316s including waiting) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.5:8181/ready": dial tcp 10.64.3.5:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.19:8181/ready": dial tcp 10.64.3.19:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.25:8181/ready": dial tcp 10.64.3.25:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-gmtb4_kube-system(48008db0-bd58-4d0b-9f0f-1a30f9ae1eed) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-m4glj to bootstrap-e2e-minion-group-v2xx Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 971.107113ms (971.12427ms including waiting) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.3:8181/ready": dial tcp 10.64.0.3:8181: connect: connection refused Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Liveness probe failed: Get "http://10.64.0.8:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-m4glj_kube-system(48c280c5-14bc-438a-86fa-1f138734ffe4) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.9:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-m4glj Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.12:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Liveness probe failed: Get "http://10.64.0.12:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Container coredns failed liveness probe, will be restarted Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container coredns Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-gmtb4 Jan 28 23:26:15.308: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-m4glj Jan 28 23:26:15.308: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 23:26:15.308: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 28 23:26:15.308: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 28 23:26:15.308: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_4ce5d became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a401b became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_85a06 became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_11417 became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_66efa became leader Jan 28 23:26:15.308: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_f2767 became leader Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-btst9 to bootstrap-e2e-minion-group-v2xx Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 603.127236ms (603.144594ms including waiting) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and 
re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-btst9_kube-system(6650f946-87f1-464b-b8b7-08392ca3dbab) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-h2g89 to bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 666.826587ms (666.837294ms including waiting) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-h2g89_kube-system(f9bf502e-a58e-40db-b5b6-dfa14e5b7875) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-h2g89_kube-system(f9bf502e-a58e-40db-b5b6-dfa14e5b7875) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "http://10.64.1.19:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-jk72b to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image 
"registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.919998904s (2.92000884s including waiting) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container konnectivity-agent Jan 28 23:26:15.308: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-jk72b_kube-system(eacd1411-5c92-4ce8-bc32-8a79a0a0aac6) Jan 28 23:26:15.308: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-jk72b Jan 28 23:26:15.308: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-h2g89 Jan 28 23:26:15.308: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-btst9 Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 23:26:15.308: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 23:26:15.308: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 28 23:26:15.308: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 23:26:15.308: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.308: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 23:26:15.308: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7f3864a-79f1-4243-a016-abad9defaf85 became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1607b7ec-e6bf-44d1-a209-56dc258333fe became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_cc0c1448-463a-48d0-91ef-9220541eaa8a became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_fbb70045-05fa-4f1e-93de-99c62df7bfea became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f6be65e3-5140-4eda-a1cd-e7225bb4436d became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_174c2f28-14d6-48ac-bbc1-8a61bda0e4b7 became leader Jan 28 23:26:15.308: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_b77e8b6e-3335-4483-9b2d-499e7708a013 became leader Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-94k5n to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.596877064s (1.596888989s including waiting) Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-94k5n_kube-system(a31058f2-55a7-4b22-9fb1-c421767f594c) Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container autoscaler Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-94k5n_kube-system(a31058f2-55a7-4b22-9fb1-c421767f594c) Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-94k5n Jan 28 23:26:15.308: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-5kqh_kube-system(64d3f4571520730431db78be9372bf75) Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-v2xx_kube-system(bb9deafc2cbae25454444f8cda5500ca) Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:26:15.308: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z2p7_kube-system(e9c46e782bd92592f44f3dd337e30259) Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z2p7_kube-system(e9c46e782bd92592f44f3dd337e30259) Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:26:15.309: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z2p7_kube-system(e9c46e782bd92592f44f3dd337e30259) Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d08dc38d-6be6-4c10-9977-2e55c0f9654d became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_cd913c2f-e98e-43bb-98bc-df89dce0f7ee became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b0c84497-1313-4390-b088-a16ae1e38e6c became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_78ae129e-0bc0-4959-bc28-a178c74018d1 became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_1c5f4221-88c4-4a3c-ab4b-7604fe80d908 became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d0306157-7329-4b46-9892-d9cf18347643 became leader Jan 28 23:26:15.309: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4b85754e-49bd-4e12-baec-261b6dcc046c became leader Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-wjzcg to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.356180827s (2.356196114s including waiting) Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:26:15.309: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-wjzcg Jan 28 23:26:15.309: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 28 23:26:15.309: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2mtlx to bootstrap-e2e-master Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 788.057866ms (788.066097ms including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.184224787s (2.184232084s including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5d8kv to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 754.439073ms (754.451345ms including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.98959934s (1.989628324s including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-cm88n to bootstrap-e2e-minion-group-v2xx Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 779.269471ms (779.280127ms including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.914880811s (1.914910128s including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-lw5t2 to bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.778377ms (733.800063ms including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container 
metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.935483668s (1.935498891s including waiting) Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-cm88n Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-lw5t2 Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2mtlx Jan 28 23:26:15.309: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5d8kv Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had 
untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-x75mm to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 4.05701627s (4.057042853s including waiting) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.333143952s (1.33319741s including waiting) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-x75mm Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-x75mm Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-v2r9c to bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.716154798s (1.716184984s including waiting) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} 
Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.026232391s (1.026241999s including waiting) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": dial tcp 10.64.1.7:10250: connect: connection refused Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-v2r9c_kube-system(b8856956-45a3-4c9e-a3fd-2359271a8fba) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-v2r9c_kube-system(b8856956-45a3-4c9e-a3fd-2359271a8fba) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.10:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-v2r9c Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.18:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.18:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server-nanny Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-v2r9c_kube-system(b8856956-45a3-4c9e-a3fd-2359271a8fba) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.20:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-v2r9c Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 23:26:15.309: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 23:26:15.309: INFO: event for 
metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.150769479s (2.150800929s including waiting) Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:26:15.309: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 23:26:15.309 (76ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 23:26:15.309 Jan 28 23:26:15.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 23:26:15.354 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 23:26:15.354 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 23:26:15.355 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 23:26:15.355 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 
23:26:15.355 STEP: Collecting events from namespace "reboot-6844". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 23:26:15.355 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/28/23 23:26:15.399 Jan 28 23:26:15.440: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 23:26:15.440: INFO: Jan 28 23:26:15.483: INFO: Logging node info for node bootstrap-e2e-master Jan 28 23:26:15.527: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 4cdfb6e7-727d-421b-a4d7-efbd5562b935 3902 0 2023-01-28 22:54:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 22:54:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 22:54:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 23:25:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 
DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:25:48 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:25:48 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:25:48 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:25:48 +0000 UTC,LastTransitionTime:2023-01-28 22:54:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.136.180,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d617d4ca8a44383986c203bfbf0066d1,SystemUUID:d617d4ca-8a44-3839-86c2-03bfbf0066d1,BootID:8dbe1fa7-5a18-43b3-9fa4-081b8c329dab,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:26:15.527: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 23:26:15.575: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 23:26:15.640: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 28 23:26:15.640: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container kube-controller-manager ready: true, restart count 7 Jan 28 23:26:15.640: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 22:53:55 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container l7-lb-controller ready: true, restart count 8 Jan 28 23:26:15.640: INFO: metadata-proxy-v0.1-2mtlx started at 2023-01-28 22:54:23 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:15.640: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 23:26:15.640: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 23:26:15.640: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container etcd-container ready: true, restart count 3 Jan 28 23:26:15.640: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container etcd-container ready: true, restart count 6 Jan 28 23:26:15.640: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container kube-apiserver ready: true, restart count 1 Jan 28 23:26:15.640: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container kube-scheduler ready: false, restart count 6 Jan 28 23:26:15.640: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 22:53:55 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:15.640: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 28 23:26:15.815: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 23:26:15.815: INFO: Logging node info for node bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.869: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-5kqh 22141237-0160-4034-9e28-ae02d88cb4ba 3803 0 2023-01-28 22:54:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-5kqh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:01:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:02:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 23:22:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 23:23:09 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-5kqh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:22:46 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:23:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.200.47,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-5kqh.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-5kqh.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9b0656f82e87ede044202cdcb6f45e0d,SystemUUID:9b0656f8-2e87-ede0-4420-2cdcb6f45e0d,BootID:54027c00-e043-4e49-be7c-dd07f5d46486,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:26:15.869: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-5kqh Jan 28 23:26:15.926: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-5kqh Jan 28 23:26:16.032: INFO: kube-proxy-bootstrap-e2e-minion-group-5kqh started at 2023-01-28 22:54:24 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container kube-proxy ready: false, restart 
count 8 Jan 28 23:26:16.032: INFO: l7-default-backend-8549d69d99-wjzcg started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container default-http-backend ready: true, restart count 2 Jan 28 23:26:16.032: INFO: kube-dns-autoscaler-5f6455f985-94k5n started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container autoscaler ready: false, restart count 5 Jan 28 23:26:16.032: INFO: volume-snapshot-controller-0 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container volume-snapshot-controller ready: false, restart count 12 Jan 28 23:26:16.032: INFO: coredns-6846b5b5f-gmtb4 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container coredns ready: true, restart count 9 Jan 28 23:26:16.032: INFO: metadata-proxy-v0.1-5d8kv started at 2023-01-28 22:54:25 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:16.032: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 23:26:16.032: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 23:26:16.032: INFO: konnectivity-agent-jk72b started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.032: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 23:26:16.211: INFO: Latency metrics for node bootstrap-e2e-minion-group-5kqh Jan 28 23:26:16.211: INFO: Logging node info for node bootstrap-e2e-minion-group-v2xx Jan 28 23:26:16.254: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-v2xx 8d3b42ad-eaa3-4569-8913-6869a3343290 3852 0 2023-01-28 22:54:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-v2xx kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:21:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:23:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 23:23:24 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 23:23:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-v2xx,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:12 +0000 UTC,LastTransitionTime:2023-01-28 23:17:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:24 +0000 UTC,LastTransitionTime:2023-01-28 23:23:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:24 +0000 UTC,LastTransitionTime:2023-01-28 23:23:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:24 +0000 UTC,LastTransitionTime:2023-01-28 23:23:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:23:24 +0000 UTC,LastTransitionTime:2023-01-28 23:23:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.43.141,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-v2xx.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-v2xx.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ee1a644c4c428a8b9c148cb020481c61,SystemUUID:ee1a644c-4c42-8a8b-9c14-8cb020481c61,BootID:c9e9ca8f-52bf-46e3-b53e-f577aaa102b2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:26:16.254: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-v2xx Jan 28 23:26:16.311: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-v2xx Jan 28 23:26:16.405: INFO: konnectivity-agent-btst9 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.405: INFO: Container konnectivity-agent ready: false, restart count 2 Jan 28 23:26:16.405: INFO: coredns-6846b5b5f-m4glj started at 2023-01-28 22:54:45 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.405: INFO: Container coredns ready: true, restart count 10 Jan 28 23:26:16.405: INFO: kube-proxy-bootstrap-e2e-minion-group-v2xx started at 2023-01-28 22:54:21 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.405: INFO: Container kube-proxy ready: true, restart count 5 Jan 28 23:26:16.405: INFO: metadata-proxy-v0.1-cm88n started at 2023-01-28 22:54:22 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:16.405: INFO: Container metadata-proxy ready: true, restart count 3 Jan 28 23:26:16.405: INFO: Container prometheus-to-sd-exporter ready: true, restart count 3 Jan 28 23:26:16.566: INFO: Latency metrics for node bootstrap-e2e-minion-group-v2xx Jan 28 23:26:16.566: INFO: Logging node info for node bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:16.608: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z2p7 bfe9b4b0-043a-4f06-a0c0-cc180155d59d 3851 0 2023-01-28 22:54:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z2p7 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:22:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:23:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 23:23:25 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 23:23:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-z2p7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:23:16 +0000 
UTC,LastTransitionTime:2023-01-28 23:17:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:25 +0000 UTC,LastTransitionTime:2023-01-28 23:23:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:25 +0000 UTC,LastTransitionTime:2023-01-28 23:23:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:23:25 +0000 UTC,LastTransitionTime:2023-01-28 23:23:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:23:25 +0000 UTC,LastTransitionTime:2023-01-28 23:23:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.4.157,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z2p7.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z2p7.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:acd066cc5d3d0c26751e787888eec6d0,SystemUUID:acd066cc-5d3d-0c26-751e-787888eec6d0,BootID:33043795-fc7a-4d49-8c43-c5f2544df172,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:26:16.608: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:16.661: 
INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z2p7 Jan 28 23:26:16.723: INFO: kube-proxy-bootstrap-e2e-minion-group-z2p7 started at 2023-01-28 22:54:22 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.723: INFO: Container kube-proxy ready: true, restart count 8 Jan 28 23:26:16.723: INFO: metadata-proxy-v0.1-lw5t2 started at 2023-01-28 22:54:23 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:16.723: INFO: Container metadata-proxy ready: true, restart count 3 Jan 28 23:26:16.723: INFO: Container prometheus-to-sd-exporter ready: true, restart count 3 Jan 28 23:26:16.723: INFO: konnectivity-agent-h2g89 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:26:16.723: INFO: Container konnectivity-agent ready: true, restart count 10 Jan 28 23:26:16.723: INFO: metrics-server-v0.5.2-867b8754b9-v2r9c started at 2023-01-28 22:54:59 +0000 UTC (0+2 container statuses recorded) Jan 28 23:26:16.723: INFO: Container metrics-server ready: false, restart count 13 Jan 28 23:26:16.723: INFO: Container metrics-server-nanny ready: false, restart count 13 Jan 28 23:26:16.897: INFO: Latency metrics for node bootstrap-e2e-minion-group-z2p7 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 23:26:16.897 (1.542s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 23:26:16.897 (1.543s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 23:26:16.897 STEP: Destroying namespace "reboot-6844" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 23:26:16.897 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 23:26:16.941 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 23:26:16.942 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 23:26:16.942 (0s)
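Note on the dump above: the node-info and per-node pod listings come from the framework's "dump namespace information after failure" cleanup step, not from the test body itself. As a rough manual equivalent for triage, assuming kubectl access to the same cluster through the kubeConfig shown earlier, the same view can be pulled with:

    # Node conditions, capacity, taints, and recent events for one minion
    kubectl describe node bootstrap-e2e-minion-group-v2xx

    # Pods scheduled on that node, with restart counts and readiness
    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=bootstrap-e2e-minion-group-v2xx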
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 23:09:54.265 (from ginkgo_report.xml)
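For readability: the timeline below includes the SSH payload the test pushes to each reachable node, printed with escaped whitespace. Unescaped, that outbound-drop script is roughly the following sketch (comments added; the original text appears verbatim in the log). It whitelists loopback traffic, drops all other outbound packets for 120 seconds, then deletes both rules, logging the trace to /tmp/drop-outbound.log:

    nohup sh -c '
        set -x
        sleep 10
        # keep loopback traffic working, then drop everything else outbound
        while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
        date
        sleep 120
        # restore connectivity by removing both rules again
        while true; do sudo iptables -D OUTPUT -j DROP && break; done
        while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-outbound.log 2>&1 &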
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 23:07:35.444 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 23:07:35.445 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 23:07:35.445 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 23:07:35.445 Jan 28 23:07:35.445: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 23:07:35.447 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 23:07:35.573 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 23:07:35.654 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 23:07:35.734 (290ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 23:07:35.734 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 23:07:35.735 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/28/23 23:07:35.735 Jan 28 23:07:35.829: INFO: Getting bootstrap-e2e-minion-group-v2xx Jan 28 23:07:35.829: INFO: Getting bootstrap-e2e-minion-group-z2p7 Jan 28 23:07:35.829: INFO: Getting bootstrap-e2e-minion-group-5kqh Jan 28 23:07:35.873: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-v2xx condition Ready to be true Jan 28 23:07:35.873: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-5kqh condition Ready to be true Jan 28 23:07:35.873: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z2p7 condition Ready to be true Jan 28 23:07:35.917: INFO: Node bootstrap-e2e-minion-group-z2p7 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-z2p7 metadata-proxy-v0.1-lw5t2] Jan 28 23:07:35.917: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-z2p7 metadata-proxy-v0.1-lw5t2] Jan 28 23:07:35.917: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-lw5t2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.917: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Node bootstrap-e2e-minion-group-v2xx has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Node bootstrap-e2e-minion-group-5kqh has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv 
volume-snapshot-controller-0] Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv volume-snapshot-controller-0] Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-cm88n" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-94k5n" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-5kqh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5d8kv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.963: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7": Phase="Running", Reason="", readiness=true. Elapsed: 45.98674ms Jan 28 23:07:35.963: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.963: INFO: Pod "metadata-proxy-v0.1-lw5t2": Phase="Running", Reason="", readiness=true. Elapsed: 46.040151ms Jan 28 23:07:35.963: INFO: Pod "metadata-proxy-v0.1-lw5t2" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.963: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-z2p7 metadata-proxy-v0.1-lw5t2] Jan 28 23:07:35.963: INFO: Getting external IP address for bootstrap-e2e-minion-group-z2p7 Jan 28 23:07:35.963: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-z2p7(34.168.4.157:22) Jan 28 23:07:35.965: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 46.658321ms Jan 28 23:07:35.965: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:35.966: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 47.821983ms Jan 28 23:07:35.966: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:35.966: INFO: Pod "metadata-proxy-v0.1-5d8kv": Phase="Running", Reason="", readiness=true. Elapsed: 47.734951ms Jan 28 23:07:35.966: INFO: Pod "metadata-proxy-v0.1-5d8kv" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.966: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx": Phase="Running", Reason="", readiness=true. Elapsed: 48.086978ms Jan 28 23:07:35.966: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.967: INFO: Pod "metadata-proxy-v0.1-cm88n": Phase="Running", Reason="", readiness=true. Elapsed: 49.058333ms Jan 28 23:07:35.967: INFO: Pod "metadata-proxy-v0.1-cm88n" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.967: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:07:35.967: INFO: Getting external IP address for bootstrap-e2e-minion-group-v2xx Jan 28 23:07:35.967: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-v2xx(34.145.43.141:22) Jan 28 23:07:35.967: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=true. 
Elapsed: 48.980488ms Jan 28 23:07:35.967: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh" satisfied condition "running and ready, or succeeded" Jan 28 23:07:36.501: INFO: ssh prow@34.145.43.141:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 23:07:36.502: INFO: ssh prow@34.145.43.141:22: stdout: "" Jan 28 23:07:36.502: INFO: ssh prow@34.145.43.141:22: stderr: "" Jan 28 23:07:36.502: INFO: ssh prow@34.145.43.141:22: exit code: 0 Jan 28 23:07:36.502: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-v2xx condition Ready to be false Jan 28 23:07:36.510: INFO: ssh prow@34.168.4.157:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 23:07:36.510: INFO: ssh prow@34.168.4.157:22: stdout: "" Jan 28 23:07:36.510: INFO: ssh prow@34.168.4.157:22: stderr: "" Jan 28 23:07:36.510: INFO: ssh prow@34.168.4.157:22: exit code: 0 Jan 28 23:07:36.510: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-z2p7 condition Ready to be false Jan 28 23:07:36.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:36.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:38.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2.088869316s Jan 28 23:07:38.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:38.008: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.090411844s Jan 28 23:07:38.008: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:38.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:38.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:40.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4.089217608s Jan 28 23:07:40.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:40.009: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.090917967s Jan 28 23:07:40.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:40.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:40.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:42.008: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.089683068s Jan 28 23:07:42.008: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:42.009: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.090945665s Jan 28 23:07:42.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:42.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:42.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:44.009: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 8.090484244s Jan 28 23:07:44.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:44.010: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.091817839s Jan 28 23:07:44.010: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:44.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:44.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:46.018: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 10.100149258s Jan 28 23:07:46.018: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:46.019: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.101518827s Jan 28 23:07:46.019: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:46.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:46.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:48.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.089217067s Jan 28 23:07:48.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:48.009: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.090672409s Jan 28 23:07:48.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:48.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:48.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:50.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 14.088970737s Jan 28 23:07:50.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:50.008: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.090376816s Jan 28 23:07:50.008: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:50.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:50.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:52.008: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 16.09043568s Jan 28 23:07:52.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:52.010: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 16.091855906s Jan 28 23:07:52.010: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 23:07:52.910: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:52.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:54.009: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 18.091059007s Jan 28 23:07:54.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:54.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 23:07:54.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:56.018: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 20.100010141s Jan 28 23:07:56.018: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:56.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:57.001: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:58.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 22.088700807s Jan 28 23:07:58.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:59.041: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:59.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:08:00.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.088788074s Jan 28 23:08:00.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:08:01.081: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:01.084: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:02.004: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-94k5n: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-94k5n": dial tcp 34.83.136.180:443: connect: connection refused Jan 28 23:08:02.005: INFO: Pod kube-dns-autoscaler-5f6455f985-94k5n failed to be running and ready, or succeeded. Jan 28 23:08:02.005: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv volume-snapshot-controller-0] Jan 28 23:08:02.005: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-94k5n: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:01:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:02:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-28 22:54:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 23:00:44 +0000 UTC,FinishedAt:2023-01-28 23:01:16 +0000 UTC,ContainerID:containerd://6610b36ea376572aa9045552b2a3a3cde3a29846696ca9838eb92776847eed45,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:5 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://6610b36ea376572aa9045552b2a3a3cde3a29846696ca9838eb92776847eed45 Started:0xc0037f4857}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 23:08:02.044: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-94k5n/autoscaler, err: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-94k5n/log?container=autoscaler&previous=false": dial tcp 34.83.136.180:443: connect: connection refused: Jan 28 23:08:02.044: 
INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-94k5n/autoscaler, err: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-94k5n/log?container=autoscaler&previous=false": dial tcp 34.83.136.180:443: connect: connection refused: Jan 28 23:08:02.044: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:06:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:06:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.32 PodIPs:[{IP:10.64.3.32}] StartTime:2023-01-28 22:54:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 1m20s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 23:04:44 +0000 UTC,FinishedAt:2023-01-28 23:06:15 +0000 UTC,ContainerID:containerd://4b6d34b3db1bef75e3cb8fc28645e993a826af743aaa0b28506d97953ac31c8f,}} Ready:false RestartCount:8 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://4b6d34b3db1bef75e3cb8fc28645e993a826af743aaa0b28506d97953ac31c8f Started:0xc0037f523f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 28 23:08:02.083: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.83.136.180:443: connect: connection refused: Jan 28 23:08:02.083: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.83.136.180:443: connect: connection refused: Jan 28 23:08:03.121: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:03.124: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:05.161: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:05.164: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:07.201: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:07.204: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:09.243: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 
23:08:09.244: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:11.283: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:11.283: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:13.324: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:13.324: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:15.364: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:15.364: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:17.403: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:17.404: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:19.444: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:19.444: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:21.484: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:21.484: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:23.524: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:23.524: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:25.564: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:25.564: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:27.604: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:27.604: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:29.645: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:29.645: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:31.685: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:31.685: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:39.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:08:39.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:22.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:22.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:24.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:24.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:26.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:26.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:29.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 23:09:29.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:31.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:31.093: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:33.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:33.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:35.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:35.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:37.197: INFO: Node bootstrap-e2e-minion-group-z2p7 didn't reach desired Ready condition status (false) within 2m0s Jan 28 23:09:37.197: INFO: Node bootstrap-e2e-minion-group-v2xx didn't reach desired Ready condition status (false) within 2m0s Jan 28 23:09:37.197: INFO: Node bootstrap-e2e-minion-group-5kqh failed reboot test. Jan 28 23:09:37.197: INFO: Node bootstrap-e2e-minion-group-v2xx failed reboot test. Jan 28 23:09:37.197: INFO: Node bootstrap-e2e-minion-group-z2p7 failed reboot test. 
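"didn't reach desired Ready condition status (false) within 2m0s" means the test kept re-reading the Node object and never saw the Ready condition flip to False before its timeout, so the reboot is marked failed. A rough sketch of that polling loop with client-go (node name and timeout taken from the log; the actual e2e helper does more bookkeeping than this):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReadyStatus polls until the node's Ready condition matches want.
func waitForNodeReadyStatus(cs kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Corresponds to the "Couldn't get node ..." entries while the API server is unreachable.
			fmt.Println("couldn't get node", name, ":", err)
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == want, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The reboot test first waits for Ready=false (node went down); 2m0s in the log above.
	if err := waitForNodeReadyStatus(cs, "bootstrap-e2e-minion-group-z2p7", corev1.ConditionFalse, 2*time.Minute); err != nil {
		fmt.Println("node never reported NotReady:", err)
	}
}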
Jan 28 23:09:37.197: INFO: Executing termination hook on nodes Jan 28 23:09:37.198: INFO: Getting external IP address for bootstrap-e2e-minion-group-5kqh Jan 28 23:09:37.198: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-5kqh(34.168.200.47:22) Jan 28 23:09:37.765: INFO: ssh prow@34.168.200.47:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 28 23:09:37.765: INFO: ssh prow@34.168.200.47:22: stdout: "" Jan 28 23:09:37.765: INFO: ssh prow@34.168.200.47:22: stderr: "cat: /tmp/drop-outbound.log: No such file or directory\n" Jan 28 23:09:37.765: INFO: ssh prow@34.168.200.47:22: exit code: 1 Jan 28 23:09:37.766: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-outbound.log: No such file or directory ) Jan 28 23:09:37.766: INFO: Getting external IP address for bootstrap-e2e-minion-group-v2xx Jan 28 23:09:37.766: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-v2xx(34.145.43.141:22) Jan 28 23:09:53.740: INFO: ssh prow@34.145.43.141:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 28 23:09:53.740: INFO: ssh prow@34.145.43.141:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 23:07:46 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 23:09:53.740: INFO: ssh prow@34.145.43.141:22: stderr: "" Jan 28 23:09:53.740: INFO: ssh prow@34.145.43.141:22: exit code: 0 Jan 28 23:09:53.740: INFO: Getting external IP address for bootstrap-e2e-minion-group-z2p7 Jan 28 23:09:53.740: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-z2p7(34.168.4.157:22) Jan 28 23:09:54.264: INFO: ssh prow@34.168.4.157:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 28 23:09:54.264: INFO: ssh prow@34.168.4.157:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 23:07:46 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 23:09:54.264: INFO: ssh prow@34.168.4.157:22: stderr: "" Jan 28 23:09:54.264: INFO: ssh prow@34.168.4.157:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 23:09:54.265 < Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/28/23 23:09:54.265 (2m18.53s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 23:09:54.265 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 23:09:54.265 Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. 
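The termination hook above collects /tmp/drop-outbound.log from each node, and its stdout reveals the per-node script: insert an ACCEPT rule for loopback, insert a blanket OUTPUT DROP, wait two minutes, then remove both rules. The sketch below reconstructs that sequence from the trace; it is a simplification (the real script retries each command, visible as the "+ true ... + break" lines) and the host/user are only placeholders copied from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// iptables sequence recovered from the "+ sudo iptables ..." trace above.
	steps := []string{
		"sleep 10",
		"sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT", // keep loopback traffic working
		"sudo iptables -I OUTPUT 2 -j DROP",                // drop every other outbound packet
		"date",
		"sleep 120", // outage window the node must recover from
		"sudo iptables -D OUTPUT -j DROP",
		"sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT",
	}
	// Run detached so the SSH session is not cut off once the DROP rule lands;
	// the trace goes to /tmp/drop-outbound.log, just like in the test.
	remote := fmt.Sprintf("nohup sh -xc %q >/tmp/drop-outbound.log 2>&1 &", strings.Join(steps, " && "))
	if err := exec.Command("ssh", "prow@34.145.43.141", remote).Run(); err != nil {
		fmt.Println("failed to start drop-outbound script:", err)
		return
	}
	// Much later (after the recovery checks), the termination hook collects and removes the trace.
	out, _ := exec.Command("ssh", "prow@34.145.43.141",
		"cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log").CombinedOutput()
	fmt.Printf("%s", out)
}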
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-gmtb4 to bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.949886764s (3.949897316s including waiting) Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.5:8181/ready": dial tcp 10.64.3.5:8181: connect: connection refused Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.19:8181/ready": dial tcp 10.64.3.19:8181: connect: connection refused Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.25:8181/ready": dial tcp 10.64.3.25:8181: connect: connection refused Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-gmtb4_kube-system(48008db0-bd58-4d0b-9f0f-1a30f9ae1eed) Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-gmtb4: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-m4glj to bootstrap-e2e-minion-group-v2xx Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 971.107113ms (971.12427ms including waiting) Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.3:8181/ready": dial tcp 10.64.0.3:8181: connect: connection refused Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Liveness probe failed: Get "http://10.64.0.8:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container coredns Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-m4glj_kube-system(48c280c5-14bc-438a-86fa-1f138734ffe4) Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f-m4glj: {kubelet bootstrap-e2e-minion-group-v2xx} Unhealthy: Readiness probe failed: Get "http://10.64.0.9:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-gmtb4 Jan 28 23:09:54.321: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-m4glj Jan 28 23:09:54.321: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 23:09:54.321: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 23:09:54.321: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 23:09:54.321: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 23:09:54.321: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container 
etcd-container Jan 28 23:09:54.321: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 23:09:54.321: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 28 23:09:54.321: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 28 23:09:54.321: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 23:09:54.321: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 23:09:54.321: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 23:09:54.321: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 28 23:09:54.321: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 28 23:09:54.321: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_4ce5d became leader Jan 28 23:09:54.321: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a401b became leader Jan 28 23:09:54.321: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_85a06 became leader Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-btst9 to bootstrap-e2e-minion-group-v2xx Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 603.127236ms (603.144594ms including waiting) Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed 
and re-created. Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-btst9_kube-system(6650f946-87f1-464b-b8b7-08392ca3dbab) Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for konnectivity-agent-btst9: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-h2g89 to bootstrap-e2e-minion-group-z2p7 Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 666.826587ms (666.837294ms including waiting) Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-h2g89_kube-system(f9bf502e-a58e-40db-b5b6-dfa14e5b7875) Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.321: INFO: event for konnectivity-agent-h2g89: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-jk72b to bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.919998904s (2.92000884s including waiting) Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container konnectivity-agent Jan 28 23:09:54.321: INFO: event for konnectivity-agent-jk72b: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-jk72b_kube-system(eacd1411-5c92-4ce8-bc32-8a79a0a0aac6) Jan 28 23:09:54.321: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-jk72b Jan 28 23:09:54.321: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-h2g89 Jan 28 23:09:54.321: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-btst9 Jan 28 23:09:54.321: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 23:09:54.321: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 23:09:54.321: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 23:09:54.321: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 23:09:54.321: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 23:09:54.321: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 23:09:54.321: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 23:09:54.321: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 23:09:54.321: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 23:09:54.321: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 23:09:54.321: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:09:54.321: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 23:09:54.321: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 23:09:54.321: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 23:09:54.321: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 23:09:54.321: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 23:09:54.321: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
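The kube-apiserver probe failures above hit its health endpoints directly on the master; the same spot check can be done by hand. A minimal sketch, assuming it runs on the control-plane node (127.0.0.1:443 is only reachable there) and that unauthenticated access to the health paths is allowed, as it is by default:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoints the kubelet probes in the events above; the exclude
	// parameters skip individual checks (etcd, KMS providers) from /livez.
	urls := []string{
		"https://127.0.0.1:443/readyz",
		"https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1",
	}
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip cert verification for this local spot check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, u := range urls {
		resp, err := client.Get(u)
		if err != nil {
			// "connection refused" here corresponds to the probe failures above.
			fmt.Println(u, "->", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(u, "->", resp.Status, string(body))
	}
}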
Jan 28 23:09:54.321: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 28 23:09:54.321: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7f3864a-79f1-4243-a016-abad9defaf85 became leader Jan 28 23:09:54.321: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1607b7ec-e6bf-44d1-a209-56dc258333fe became leader Jan 28 23:09:54.321: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_cc0c1448-463a-48d0-91ef-9220541eaa8a became leader Jan 28 23:09:54.321: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_fbb70045-05fa-4f1e-93de-99c62df7bfea became leader Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-94k5n to bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.596877064s (1.596888989s including waiting) Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container autoscaler Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container autoscaler Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container autoscaler Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-94k5n_kube-system(a31058f2-55a7-4b22-9fb1-c421767f594c) Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container autoscaler Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container autoscaler Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container autoscaler Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-94k5n_kube-system(a31058f2-55a7-4b22-9fb1-c421767f594c) Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985-94k5n: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-94k5n Jan 28 23:09:54.321: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-5kqh: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-5kqh_kube-system(64d3f4571520730431db78be9372bf75) Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Killing: Stopping container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-v2xx_kube-system(bb9deafc2cbae25454444f8cda5500ca) Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-v2xx: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z2p7_kube-system(e9c46e782bd92592f44f3dd337e30259) Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container kube-proxy Jan 28 23:09:54.321: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z2p7: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z2p7_kube-system(e9c46e782bd92592f44f3dd337e30259) Jan 28 23:09:54.321: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 23:09:54.321: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 23:09:54.321: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 23:09:54.321: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 23:09:54.321: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 28 23:09:54.321: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d08dc38d-6be6-4c10-9977-2e55c0f9654d became leader Jan 28 23:09:54.321: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_cd913c2f-e98e-43bb-98bc-df89dce0f7ee became leader Jan 28 23:09:54.321: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b0c84497-1313-4390-b088-a16ae1e38e6c became leader Jan 28 23:09:54.321: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_78ae129e-0bc0-4959-bc28-a178c74018d1 became leader Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-wjzcg to bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.356180827s (2.356196114s including waiting) Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container default-http-backend Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99-wjzcg: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container default-http-backend Jan 28 23:09:54.321: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-wjzcg Jan 28 23:09:54.321: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 23:09:54.321: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 23:09:54.321: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 23:09:54.321: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 23:09:54.321: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 23:09:54.321: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 28 23:09:54.321: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
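Everything from "Collecting events from namespace kube-system" onward is a plain dump of the namespace's events. A client-go sketch that produces a similar listing (assuming the same kubeconfig; the output format only approximates the framework's "event for <object>: {<source>} <reason>: <message>" lines):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	events, err := cs.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Roughly the same shape as the "event for ..." lines above.
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}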
Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2mtlx to bootstrap-e2e-master Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 788.057866ms (788.066097ms including waiting) Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.184224787s (2.184232084s including waiting) Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-2mtlx: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5d8kv to bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 754.439073ms (754.451345ms including waiting) Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.98959934s (1.989628324s including waiting) Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-5d8kv: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-cm88n to bootstrap-e2e-minion-group-v2xx Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 779.269471ms (779.280127ms including waiting) Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.914880811s (1.914910128s including waiting) Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-cm88n: {kubelet bootstrap-e2e-minion-group-v2xx} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-lw5t2 to bootstrap-e2e-minion-group-z2p7 Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 733.778377ms (733.800063ms including waiting) Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.935483668s (1.935498891s including waiting) Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metadata-proxy Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1-lw5t2: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container prometheus-to-sd-exporter Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-cm88n Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-lw5t2 Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2mtlx Jan 28 23:09:54.321: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5d8kv Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
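As an aside to the dump above: the FailedScheduling message cites an untolerated node.kubernetes.io/not-ready taint, and several pods carry node-controller NodeNotReady events. A hedged client-go sketch for inspecting node taints and the Ready condition that those events track, assuming ordinary cluster access via a kubeconfig rather than the test harness, is:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig source
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Taints such as node.kubernetes.io/not-ready are what produce FailedScheduling events.
		for _, t := range n.Spec.Taints {
			fmt.Printf("%s taint %s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
		}
		// The Ready condition is what the node-controller's NodeNotReady events reflect.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s (%s: %s)\n", n.Name, c.Status, c.Reason, c.Message)
			}
		}
	}
}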
Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-x75mm to bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 4.05701627s (4.057042853s including waiting) Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metrics-server Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metrics-server Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.333143952s (1.33319741s including waiting) Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container metrics-server-nanny Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container metrics-server-nanny Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container metrics-server Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c-x75mm: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container metrics-server-nanny Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-x75mm Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-x75mm Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-v2r9c to bootstrap-e2e-minion-group-z2p7 Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.716154798s (1.716184984s including waiting) Jan 28 23:09:54.321: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet 
bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.026232391s (1.026241999s including waiting) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server-nanny Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
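As an aside to the dump above: the Unhealthy events are kubelet HTTP probes against the metrics-server pod's /readyz and /livez endpoints failing with connection refused, timeouts, or HTTP 500. A rough approximation of such a check from a machine with pod-network reachability is sketched below; the URL is copied from one of the probe messages, and skipping TLS verification is an assumption made only so this ad-hoc client tolerates the pod's serving certificate (the kubelet's real probe machinery differs).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Pod IP and port copied from the probe failure messages in the events above.
	url := "https://10.64.1.5:10250/readyz"

	client := &http.Client{
		Timeout: 10 * time.Second, // comparable to the probe timeouts seen in the events
		Transport: &http.Transport{
			// Assumption: accept the pod's serving certificate without verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Corresponds to the "connection refused" / "Client.Timeout exceeded" failures.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	// A 500 here corresponds to "HTTP probe failed with statuscode: 500".
	fmt.Println("probe status:", resp.StatusCode)
}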
Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Created: Created container metrics-server-nanny Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Started: Started container metrics-server-nanny Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Liveness probe failed: Get "https://10.64.1.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Killing: Stopping container metrics-server-nanny Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": dial tcp 10.64.1.7:10250: connect: connection refused Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-v2r9c_kube-system(b8856956-45a3-4c9e-a3fd-2359271a8fba) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-v2r9c_kube-system(b8856956-45a3-4c9e-a3fd-2359271a8fba) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9-v2r9c: {kubelet bootstrap-e2e-minion-group-z2p7} Unhealthy: Readiness probe failed: Get "https://10.64.1.10:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers) Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-v2r9c Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 23:09:54.322: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.150769479s (2.150800929s including waiting) Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Created: Created container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Started: Started container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} Killing: Stopping container volume-snapshot-controller Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-5kqh} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59) Jan 28 23:09:54.322: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 23:09:54.322 (57ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 23:09:54.322 Jan 28 23:09:54.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 23:09:54.367 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 23:09:54.367 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 23:09:54.367 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 23:09:54.367 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 
23:09:54.367 STEP: Collecting events from namespace "reboot-4484". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 23:09:54.367 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/28/23 23:09:54.408 Jan 28 23:09:54.449: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 23:09:54.449: INFO: Jan 28 23:09:54.495: INFO: Logging node info for node bootstrap-e2e-master Jan 28 23:09:54.537: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 4cdfb6e7-727d-421b-a4d7-efbd5562b935 2202 0 2023-01-28 22:54:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 22:54:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 22:54:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 23:04:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 
DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:04:57 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:04:57 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:04:57 +0000 UTC,LastTransitionTime:2023-01-28 22:54:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:04:57 +0000 UTC,LastTransitionTime:2023-01-28 22:54:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.136.180,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d617d4ca8a44383986c203bfbf0066d1,SystemUUID:d617d4ca-8a44-3839-86c2-03bfbf0066d1,BootID:8dbe1fa7-5a18-43b3-9fa4-081b8c329dab,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:09:54.537: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 23:09:54.583: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 23:09:54.654: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.654: INFO: Container kube-controller-manager ready: true, restart count 5 Jan 28 23:09:54.654: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 22:53:55 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.654: INFO: Container l7-lb-controller ready: false, restart count 5 Jan 28 23:09:54.654: INFO: metadata-proxy-v0.1-2mtlx started at 2023-01-28 22:54:23 +0000 UTC (0+2 container statuses recorded) Jan 28 23:09:54.654: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 23:09:54.654: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 23:09:54.654: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.654: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 28 23:09:54.654: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.654: INFO: Container etcd-container ready: true, restart count 3 Jan 28 23:09:54.654: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.654: INFO: Container kube-apiserver ready: true, restart count 1 Jan 28 23:09:54.654: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.654: INFO: Container kube-scheduler ready: true, restart count 3 Jan 28 23:09:54.654: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 22:53:55 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.654: INFO: Container kube-addon-manager ready: true, restart count 1 Jan 28 23:09:54.654: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 22:53:37 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.654: INFO: Container etcd-container ready: true, restart count 3 Jan 28 23:09:54.832: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 23:09:54.832: INFO: Logging node info for node bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.874: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-5kqh 22141237-0160-4034-9e28-ae02d88cb4ba 2481 0 2023-01-28 22:54:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-5kqh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:01:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:02:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 23:07:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 23:07:15 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-5kqh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:07:10 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:07:10 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:07:10 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:10 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:10 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:10 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:07:10 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:15 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:15 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:15 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:07:15 +0000 UTC,LastTransitionTime:2023-01-28 23:02:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.200.47,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-5kqh.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-5kqh.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9b0656f82e87ede044202cdcb6f45e0d,SystemUUID:9b0656f8-2e87-ede0-4420-2cdcb6f45e0d,BootID:54027c00-e043-4e49-be7c-dd07f5d46486,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:09:54.874: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.921: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-5kqh Jan 28 23:09:54.985: INFO: coredns-6846b5b5f-gmtb4 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.985: INFO: Container coredns ready: true, restart count 5 Jan 28 
23:09:54.985: INFO: metadata-proxy-v0.1-5d8kv started at 2023-01-28 22:54:25 +0000 UTC (0+2 container statuses recorded) Jan 28 23:09:54.985: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 23:09:54.985: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 23:09:54.985: INFO: konnectivity-agent-jk72b started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.985: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 28 23:09:54.985: INFO: kube-proxy-bootstrap-e2e-minion-group-5kqh started at 2023-01-28 22:54:24 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.985: INFO: Container kube-proxy ready: true, restart count 4 Jan 28 23:09:54.985: INFO: l7-default-backend-8549d69d99-wjzcg started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.985: INFO: Container default-http-backend ready: true, restart count 2 Jan 28 23:09:54.985: INFO: kube-dns-autoscaler-5f6455f985-94k5n started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.985: INFO: Container autoscaler ready: false, restart count 5 Jan 28 23:09:54.985: INFO: volume-snapshot-controller-0 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:54.985: INFO: Container volume-snapshot-controller ready: false, restart count 9 Jan 28 23:09:55.153: INFO: Latency metrics for node bootstrap-e2e-minion-group-5kqh Jan 28 23:09:55.153: INFO: Logging node info for node bootstrap-e2e-minion-group-v2xx Jan 28 23:09:55.195: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-v2xx 8d3b42ad-eaa3-4569-8913-6869a3343290 2477 0 2023-01-28 22:54:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-v2xx kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:01:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 23:07:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 23:07:14 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-v2xx,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:07:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:07:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:07:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:07:09 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:14 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:14 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:14 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:07:14 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.43.141,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-v2xx.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-v2xx.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ee1a644c4c428a8b9c148cb020481c61,SystemUUID:ee1a644c-4c42-8a8b-9c14-8cb020481c61,BootID:0834ca37-4b8c-4e2c-9477-51562f8a1006,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:09:55.195: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-v2xx Jan 28 23:09:55.246: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-v2xx Jan 28 23:09:55.307: INFO: kube-proxy-bootstrap-e2e-minion-group-v2xx started at 2023-01-28 22:54:21 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:55.307: INFO: Container kube-proxy ready: true, restart count 4 Jan 28 23:09:55.307: INFO: metadata-proxy-v0.1-cm88n started at 2023-01-28 22:54:22 +0000 UTC (0+2 container statuses recorded) Jan 28 23:09:55.307: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 23:09:55.307: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 23:09:55.307: INFO: konnectivity-agent-btst9 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:55.307: INFO: Container konnectivity-agent ready: false, restart count 2 Jan 28 23:09:55.307: INFO: coredns-6846b5b5f-m4glj started at 2023-01-28 22:54:45 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:55.307: INFO: Container coredns ready: true, restart count 4 Jan 28 23:09:55.475: INFO: Latency metrics for node bootstrap-e2e-minion-group-v2xx Jan 28 23:09:55.475: INFO: Logging node info for node bootstrap-e2e-minion-group-z2p7 Jan 28 23:09:55.517: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z2p7 bfe9b4b0-043a-4f06-a0c0-cc180155d59d 2483 0 2023-01-28 22:54:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z2p7 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 22:54:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 23:01:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 23:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 23:07:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 23:07:15 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-12/us-west1-b/bootstrap-e2e-minion-group-z2p7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 23:07:08 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 23:07:08 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:08 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:08 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 23:07:08 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 23:07:08 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 23:07:08 +0000 UTC,LastTransitionTime:2023-01-28 23:02:07 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 22:54:40 +0000 UTC,LastTransitionTime:2023-01-28 22:54:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:15 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:15 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 23:07:15 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 23:07:15 +0000 UTC,LastTransitionTime:2023-01-28 23:02:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.4.157,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z2p7.c.k8s-boskos-gce-project-12.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z2p7.c.k8s-boskos-gce-project-12.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:acd066cc5d3d0c26751e787888eec6d0,SystemUUID:acd066cc-5d3d-0c26-751e-787888eec6d0,BootID:d66ac406-e519-4683-9f4f-e7d9464e92ca,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 23:09:55.517: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z2p7 Jan 28 23:09:55.563: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z2p7 Jan 28 23:09:55.625: INFO: kube-proxy-bootstrap-e2e-minion-group-z2p7 started at 2023-01-28 22:54:22 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:55.626: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 23:09:55.626: INFO: metadata-proxy-v0.1-lw5t2 started at 2023-01-28 22:54:23 +0000 UTC (0+2 container statuses recorded) Jan 28 23:09:55.626: INFO: Container 
metadata-proxy ready: true, restart count 2 Jan 28 23:09:55.626: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 23:09:55.626: INFO: konnectivity-agent-h2g89 started at 2023-01-28 22:54:40 +0000 UTC (0+1 container statuses recorded) Jan 28 23:09:55.626: INFO: Container konnectivity-agent ready: false, restart count 5 Jan 28 23:09:55.626: INFO: metrics-server-v0.5.2-867b8754b9-v2r9c started at 2023-01-28 22:54:59 +0000 UTC (0+2 container statuses recorded) Jan 28 23:09:55.626: INFO: Container metrics-server ready: false, restart count 6 Jan 28 23:09:55.626: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 28 23:09:55.787: INFO: Latency metrics for node bootstrap-e2e-minion-group-z2p7 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 23:09:55.787 (1.42s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 23:09:55.787 (1.42s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 23:09:55.787 STEP: Destroying namespace "reboot-4484" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 23:09:55.787 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 23:09:55.831 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 23:09:55.831 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 23:09:55.831 (0s)
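The failure dump above (Node Info, kubelet events, pods the kubelet reports, latency metrics) can be approximated by hand when triaging a run like this. A minimal kubectl sketch, assuming access to the same kubeconfig the suite used (/workspace/.kube/config) and substituting one of the node names from the log; the exact commands are illustrative, not what the framework runs internally:

export KUBECONFIG=/workspace/.kube/config
NODE=bootstrap-e2e-minion-group-v2xx                                   # node name taken from the log above
kubectl describe node "$NODE"                                          # conditions, capacity, images (same data as the Node Info dump)
kubectl get events -A --field-selector involvedObject.name="$NODE"     # kubelet/node events
kubectl get pods -A --field-selector spec.nodeName="$NODE" -o wide     # pods scheduled to the node, ready state, restart counts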
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 23:09:54.265
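The test disrupts each node over SSH before waiting for it to go NotReady and recover. The command it sends appears as an escaped string in the log below; unescaped, it is the following shell script, which whitelists loopback, drops all other outbound packets for 120 s, and then removes both rules (reconstructed from the SSH string in the log, not from the test source):

nohup sh -c '
    set -x
    sleep 10
    # keep loopback traffic flowing, then drop everything else outbound
    while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
    while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
    date
    sleep 120
    # restore connectivity
    while true; do sudo iptables -D OUTPUT -j DROP && break; done
    while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-outbound.log 2>&1 &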
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 23:07:35.444 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 23:07:35.445 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 23:07:35.445 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 23:07:35.445 Jan 28 23:07:35.445: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 23:07:35.447 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 23:07:35.573 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 23:07:35.654 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 23:07:35.734 (290ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 23:07:35.734 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 23:07:35.735 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/28/23 23:07:35.735 Jan 28 23:07:35.829: INFO: Getting bootstrap-e2e-minion-group-v2xx Jan 28 23:07:35.829: INFO: Getting bootstrap-e2e-minion-group-z2p7 Jan 28 23:07:35.829: INFO: Getting bootstrap-e2e-minion-group-5kqh Jan 28 23:07:35.873: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-v2xx condition Ready to be true Jan 28 23:07:35.873: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-5kqh condition Ready to be true Jan 28 23:07:35.873: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z2p7 condition Ready to be true Jan 28 23:07:35.917: INFO: Node bootstrap-e2e-minion-group-z2p7 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-z2p7 metadata-proxy-v0.1-lw5t2] Jan 28 23:07:35.917: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-z2p7 metadata-proxy-v0.1-lw5t2] Jan 28 23:07:35.917: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-lw5t2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.917: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Node bootstrap-e2e-minion-group-v2xx has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Node bootstrap-e2e-minion-group-5kqh has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv 
volume-snapshot-controller-0] Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv volume-snapshot-controller-0] Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-cm88n" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-94k5n" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-5kqh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.918: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5d8kv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 23:07:35.963: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7": Phase="Running", Reason="", readiness=true. Elapsed: 45.98674ms Jan 28 23:07:35.963: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z2p7" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.963: INFO: Pod "metadata-proxy-v0.1-lw5t2": Phase="Running", Reason="", readiness=true. Elapsed: 46.040151ms Jan 28 23:07:35.963: INFO: Pod "metadata-proxy-v0.1-lw5t2" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.963: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-z2p7 metadata-proxy-v0.1-lw5t2] Jan 28 23:07:35.963: INFO: Getting external IP address for bootstrap-e2e-minion-group-z2p7 Jan 28 23:07:35.963: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-z2p7(34.168.4.157:22) Jan 28 23:07:35.965: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 46.658321ms Jan 28 23:07:35.965: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:35.966: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 47.821983ms Jan 28 23:07:35.966: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:35.966: INFO: Pod "metadata-proxy-v0.1-5d8kv": Phase="Running", Reason="", readiness=true. Elapsed: 47.734951ms Jan 28 23:07:35.966: INFO: Pod "metadata-proxy-v0.1-5d8kv" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.966: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx": Phase="Running", Reason="", readiness=true. Elapsed: 48.086978ms Jan 28 23:07:35.966: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-v2xx" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.967: INFO: Pod "metadata-proxy-v0.1-cm88n": Phase="Running", Reason="", readiness=true. Elapsed: 49.058333ms Jan 28 23:07:35.967: INFO: Pod "metadata-proxy-v0.1-cm88n" satisfied condition "running and ready, or succeeded" Jan 28 23:07:35.967: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-cm88n kube-proxy-bootstrap-e2e-minion-group-v2xx] Jan 28 23:07:35.967: INFO: Getting external IP address for bootstrap-e2e-minion-group-v2xx Jan 28 23:07:35.967: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-v2xx(34.145.43.141:22) Jan 28 23:07:35.967: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh": Phase="Running", Reason="", readiness=true. 
Elapsed: 48.980488ms Jan 28 23:07:35.967: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-5kqh" satisfied condition "running and ready, or succeeded" Jan 28 23:07:36.501: INFO: ssh prow@34.145.43.141:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 23:07:36.502: INFO: ssh prow@34.145.43.141:22: stdout: "" Jan 28 23:07:36.502: INFO: ssh prow@34.145.43.141:22: stderr: "" Jan 28 23:07:36.502: INFO: ssh prow@34.145.43.141:22: exit code: 0 Jan 28 23:07:36.502: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-v2xx condition Ready to be false Jan 28 23:07:36.510: INFO: ssh prow@34.168.4.157:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 28 23:07:36.510: INFO: ssh prow@34.168.4.157:22: stdout: "" Jan 28 23:07:36.510: INFO: ssh prow@34.168.4.157:22: stderr: "" Jan 28 23:07:36.510: INFO: ssh prow@34.168.4.157:22: exit code: 0 Jan 28 23:07:36.510: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-z2p7 condition Ready to be false Jan 28 23:07:36.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:36.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:38.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 2.088869316s Jan 28 23:07:38.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:38.008: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.090411844s Jan 28 23:07:38.008: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:38.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:38.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:40.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 4.089217608s Jan 28 23:07:40.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:40.009: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.090917967s Jan 28 23:07:40.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:40.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:40.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:42.008: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.089683068s Jan 28 23:07:42.008: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:42.009: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.090945665s Jan 28 23:07:42.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:42.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:42.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:44.009: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 8.090484244s Jan 28 23:07:44.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:44.010: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.091817839s Jan 28 23:07:44.010: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:44.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:44.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:46.018: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 10.100149258s Jan 28 23:07:46.018: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:46.019: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.101518827s Jan 28 23:07:46.019: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:46.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:46.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:48.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.089217067s Jan 28 23:07:48.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:48.009: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.090672409s Jan 28 23:07:48.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:48.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:48.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:50.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 14.088970737s Jan 28 23:07:50.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:50.008: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.090376816s Jan 28 23:07:50.008: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:06:16 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:50.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:50.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:52.008: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 16.09043568s Jan 28 23:07:52.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:52.010: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 16.091855906s Jan 28 23:07:52.010: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 23:07:52.910: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:52.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:54.009: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 18.091059007s Jan 28 23:07:54.009: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:54.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 23:07:54.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:56.018: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 20.100010141s Jan 28 23:07:56.018: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:56.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:57.001: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:58.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. Elapsed: 22.088700807s Jan 28 23:07:58.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:07:59.041: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:07:59.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:08:00.007: INFO: Pod "kube-dns-autoscaler-5f6455f985-94k5n": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.088788074s Jan 28 23:08:00.007: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-94k5n' on 'bootstrap-e2e-minion-group-5kqh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:01:31 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 23:02:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:54:40 +0000 UTC }] Jan 28 23:08:01.081: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:01.084: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:02.004: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-94k5n: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-94k5n": dial tcp 34.83.136.180:443: connect: connection refused Jan 28 23:08:02.005: INFO: Pod kube-dns-autoscaler-5f6455f985-94k5n failed to be running and ready, or succeeded. Jan 28 23:08:02.005: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-94k5n kube-proxy-bootstrap-e2e-minion-group-5kqh metadata-proxy-v0.1-5d8kv volume-snapshot-controller-0] Jan 28 23:08:02.005: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-94k5n: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:01:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:02:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-28 22:54:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 23:00:44 +0000 UTC,FinishedAt:2023-01-28 23:01:16 +0000 UTC,ContainerID:containerd://6610b36ea376572aa9045552b2a3a3cde3a29846696ca9838eb92776847eed45,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:5 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://6610b36ea376572aa9045552b2a3a3cde3a29846696ca9838eb92776847eed45 Started:0xc0037f4857}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 23:08:02.044: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-94k5n/autoscaler, err: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-94k5n/log?container=autoscaler&previous=false": dial tcp 34.83.136.180:443: connect: connection refused: Jan 28 23:08:02.044: 
INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-94k5n/autoscaler, err: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-94k5n/log?container=autoscaler&previous=false": dial tcp 34.83.136.180:443: connect: connection refused: Jan 28 23:08:02.044: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:06:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 23:06:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:54:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.32 PodIPs:[{IP:10.64.3.32}] StartTime:2023-01-28 22:54:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 1m20s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(5e06e33a-3aff-4f65-9b6b-f080476a8d59),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 23:04:44 +0000 UTC,FinishedAt:2023-01-28 23:06:15 +0000 UTC,ContainerID:containerd://4b6d34b3db1bef75e3cb8fc28645e993a826af743aaa0b28506d97953ac31c8f,}} Ready:false RestartCount:8 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://4b6d34b3db1bef75e3cb8fc28645e993a826af743aaa0b28506d97953ac31c8f Started:0xc0037f523f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 28 23:08:02.083: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.83.136.180:443: connect: connection refused: Jan 28 23:08:02.083: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://34.83.136.180/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 34.83.136.180:443: connect: connection refused: Jan 28 23:08:03.121: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:03.124: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:05.161: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:05.164: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:07.201: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:07.204: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:09.243: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 
23:08:09.244: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:11.283: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:11.283: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:13.324: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:13.324: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:15.364: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:15.364: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:17.403: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:17.404: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:19.444: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:19.444: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:21.484: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:21.484: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:23.524: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:23.524: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:25.564: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:25.564: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:27.604: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:27.604: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:29.645: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:29.645: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:31.685: INFO: Couldn't get node bootstrap-e2e-minion-group-v2xx Jan 28 23:08:31.685: INFO: Couldn't get node bootstrap-e2e-minion-group-z2p7 Jan 28 23:08:39.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:08:39.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:22.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:22.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:24.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:24.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:26.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:26.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 23:09:29.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-v2xx is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 23:09:29.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-z2p7 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 2
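The tail of the log shows the timeout mechanism directly: after injecting the outbound drop, the suite polls each node for up to 2m0s expecting condition Ready to become false, but both nodes keep reporting Ready=true (and for a stretch the API server itself refuses connections), so the spec fails at reboot.go:190. A sketch for watching the same condition by hand, assuming the same kubeconfig; the jsonpath filter is illustrative, not taken from the test:

kubectl get node bootstrap-e2e-minion-group-z2p7 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'   # prints True, False, or Unknown
kubectl get nodes -w                                                    # or stream Ready transitions for all nodes while the 120 s drop is active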