go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
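For reference, the escaped --ginkgo.focus regex above selects the single spec named "Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards". A minimal sketch of running the same spec directly against a built e2e.test binary; the binary path and provider flag here are assumptions for illustration, not taken from this run:

```sh
# Hypothetical direct invocation of the focused spec; adjust the path,
# provider, and kubeconfig for your own environment.
./_output/bin/e2e.test \
  --provider=gce \
  --kubeconfig="$HOME/.kube/config" \
  --ginkgo.focus='\[sig-cloud-provider-gcp\] Reboot \[Disruptive\] \[Feature:Reboot\] each node by dropping all outbound packets'
```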
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 18:12:20.563 (from ginkgo_report.xml)
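The "drop all outbound packets" phase is driven by a small shell script that the test pushes to each node over SSH; it appears in escaped form in the log below. A readable reconstruction of that same script (the comments are added here and are not part of the test):

```sh
nohup sh -c '
    set -x
    sleep 10
    # keep loopback traffic working while everything else is blackholed
    while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
    # drop every other outbound packet from the node
    while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
    date
    sleep 120
    # restore outbound connectivity after two minutes
    while true; do sudo iptables -D OUTPUT -j DROP && break; done
    while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-outbound.log 2>&1 &
```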
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:06:03.654
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:06:03.654 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:06:03.654
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 18:06:03.655
Jan 29 18:06:03.655: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 18:06:03.656
Jan 29 18:06:03.696: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused
Jan 29 18:06:05.736: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 18:07:18.263
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 18:07:18.344
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:07:18.424 (1m14.77s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:07:18.424
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:07:18.424 (0s)
> Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 18:07:18.424
Jan 29 18:07:18.607: INFO: Getting bootstrap-e2e-minion-group-s96g
Jan 29 18:07:18.607: INFO: Getting bootstrap-e2e-minion-group-dsnz
Jan 29 18:07:18.608: INFO: Getting bootstrap-e2e-minion-group-9h8t
Jan 29 18:07:18.651: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s96g condition Ready to be true
Jan 29 18:07:18.651: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dsnz condition Ready to be true
Jan 29 18:07:18.668: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-9h8t condition Ready to be true
Jan 29 18:07:18.695: INFO: Node bootstrap-e2e-minion-group-dsnz has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287]
Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287]
Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8v287" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:07:18.695: INFO: Node bootstrap-e2e-minion-group-s96g has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn]
Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn]
Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-4xsdn" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s96g" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:07:18.709: INFO: Node bootstrap-e2e-minion-group-9h8t has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-9smhj]
Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-9smhj]
Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-9smhj" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-dnsxr" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9h8t" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:07:18.738: INFO: Pod "metadata-proxy-v0.1-8v287": Phase="Running", Reason="", readiness=true. Elapsed: 43.137425ms
Jan 29 18:07:18.738: INFO: Pod "metadata-proxy-v0.1-8v287" satisfied condition "running and ready, or succeeded"
Jan 29 18:07:18.740: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=true. Elapsed: 45.18281ms
Jan 29 18:07:18.740: INFO: Pod "metadata-proxy-v0.1-4xsdn" satisfied condition "running and ready, or succeeded"
Jan 29 18:07:18.740: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz": Phase="Running", Reason="", readiness=true. Elapsed: 45.192454ms
Jan 29 18:07:18.740: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" satisfied condition "running and ready, or succeeded"
Jan 29 18:07:18.740: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287]
Jan 29 18:07:18.740: INFO: Getting external IP address for bootstrap-e2e-minion-group-dsnz
Jan 29 18:07:18.740: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-dsnz(34.168.175.64:22)
Jan 29 18:07:18.741: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g": Phase="Running", Reason="", readiness=true. Elapsed: 45.7081ms
Jan 29 18:07:18.741: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g" satisfied condition "running and ready, or succeeded"
Jan 29 18:07:18.741: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn]
Jan 29 18:07:18.741: INFO: Getting external IP address for bootstrap-e2e-minion-group-s96g
Jan 29 18:07:18.741: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-s96g(35.233.157.204:22)
Jan 29 18:07:18.754: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 44.364125ms
Jan 29 18:07:18.754: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:07:18.755: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.218222ms
Jan 29 18:07:18.755: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 18:07:18.755: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9h8t": Phase="Running", Reason="", readiness=true. Elapsed: 45.225787ms
Jan 29 18:07:18.755: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9h8t" satisfied condition "running and ready, or succeeded"
Jan 29 18:07:18.755: INFO: Pod "metadata-proxy-v0.1-dnsxr": Phase="Running", Reason="", readiness=true. Elapsed: 45.308215ms
Jan 29 18:07:18.755: INFO: Pod "metadata-proxy-v0.1-dnsxr" satisfied condition "running and ready, or succeeded"
Jan 29 18:07:19.271: INFO: ssh prow@35.233.157.204:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 &
Jan 29 18:07:19.271: INFO: ssh prow@35.233.157.204:22: stdout: ""
Jan 29 18:07:19.271: INFO: ssh prow@35.233.157.204:22: stderr: ""
Jan 29 18:07:19.271: INFO: ssh prow@35.233.157.204:22: exit code: 0
Jan 29 18:07:19.271: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s96g condition Ready to be false
Jan 29 18:07:19.282: INFO: ssh prow@34.168.175.64:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 &
Jan 29 18:07:19.282: INFO: ssh prow@34.168.175.64:22: stdout: ""
Jan 29 18:07:19.282: INFO: ssh prow@34.168.175.64:22: stderr: ""
Jan 29 18:07:19.282: INFO: ssh prow@34.168.175.64:22: exit code: 0
Jan 29 18:07:19.282: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-dsnz condition Ready to be false
Jan 29 18:07:19.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 18:07:19.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 18:07:20.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2.087116718s
Jan 29 18:07:20.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:07:21.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 18:07:21.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 18:07:22.803: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.093155406s Jan 29 18:07:22.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:23.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:23.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:24.801: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 6.092091957s Jan 29 18:07:24.801: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:25.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:25.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:26.799: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 8.089948815s Jan 29 18:07:26.799: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:27.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:27.505: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:07:28.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 10.086820434s Jan 29 18:07:28.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:29.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:29.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:30.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 12.086559954s Jan 29 18:07:30.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:31.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:31.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:32.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 14.089127781s Jan 29 18:07:32.799: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:33.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:33.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:34.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 16.086088841s Jan 29 18:07:34.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:35.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:35.676: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:36.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 18.086912788s Jan 29 18:07:36.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:37.716: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:37.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:38.801: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 20.091644588s Jan 29 18:07:38.801: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:39.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:07:39.762: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:40.808: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 22.09867204s Jan 29 18:07:40.808: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:41.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:41.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:42.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 24.086471979s Jan 29 18:07:42.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:43.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:43.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:44.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 26.08808201s Jan 29 18:07:44.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:45.889: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:45.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:46.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 28.08674272s Jan 29 18:07:46.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:47.932: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:47.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:48.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 30.086614055s Jan 29 18:07:48.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:49.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:49.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:50.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.086821797s Jan 29 18:07:50.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:52.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:52.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:52.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 34.086717896s Jan 29 18:07:52.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:54.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:54.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:54.803: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 36.093782606s Jan 29 18:07:54.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:56.113: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:56.113: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:07:56.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 38.086720019s Jan 29 18:07:56.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:58.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:58.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:58.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 40.085946806s Jan 29 18:07:58.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:00.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:08:00.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:08:00.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 42.087611921s Jan 29 18:08:00.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:02.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:08:02.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:08:02.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 44.086524248s Jan 29 18:08:02.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:04.358: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-dsnz condition Ready to be true Jan 29 18:08:04.358: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-s96g condition Ready to be true Jan 29 18:08:04.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:04.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:04.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 46.087945361s Jan 29 18:08:04.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:06.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:06.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:06.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.087149284s Jan 29 18:08:06.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:08.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:08.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:08.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 50.086433349s Jan 29 18:08:08.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:10.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:10.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:10.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 52.086294147s Jan 29 18:08:10.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:12.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 18:08:12.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:12.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 54.086657986s Jan 29 18:08:12.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:14.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:14.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:14.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 56.087867224s Jan 29 18:08:14.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:16.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:16.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:16.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.08664219s Jan 29 18:08:16.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:18.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:18.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:18.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.087400767s Jan 29 18:08:18.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:20.757: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:20.757: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:20.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.086528036s Jan 29 18:08:20.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:22.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.086078968s Jan 29 18:08:22.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:22.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:22.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:24.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.086881648s Jan 29 18:08:24.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:24.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:24.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. 
Failure Jan 29 18:08:26.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.086411012s Jan 29 18:08:26.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:26.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:26.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:28.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.086911756s Jan 29 18:08:28.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:28.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:28.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:30.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.086752532s Jan 29 18:08:30.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:30.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:30.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:32.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.087728671s Jan 29 18:08:32.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:33.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:33.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:34.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.088316463s Jan 29 18:08:34.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:35.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:35.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:36.808: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.098468619s Jan 29 18:08:36.808: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:37.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:37.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:38.801: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.091420466s Jan 29 18:08:38.801: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:39.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:39.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:40.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.087521686s Jan 29 18:08:40.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:41.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:41.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:42.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.08741375s Jan 29 18:08:42.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:43.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:43.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:44.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.087394103s Jan 29 18:08:44.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:45.288: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:45.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:46.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m28.086195385s Jan 29 18:08:46.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:47.332: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:47.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:48.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.087816409s Jan 29 18:08:48.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:49.375: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:49.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:50.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.086344169s Jan 29 18:08:50.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:51.417: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:51.419: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:52.812: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.102694432s Jan 29 18:08:52.812: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:53.460: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:53.462: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:54.799: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m36.089550858s Jan 29 18:08:54.799: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:55.505: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:55.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:56.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.086789856s Jan 29 18:08:56.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:57.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:57.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:58.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m40.086576893s Jan 29 18:08:58.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:59.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:59.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:00.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.086569016s Jan 29 18:09:00.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:01.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:01.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:02.813: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m44.103640186s Jan 29 18:09:02.813: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:03.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:03.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:04.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.088107231s Jan 29 18:09:04.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:05.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:05.723: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:06.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m48.086329509s Jan 29 18:09:06.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:07.763: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:07.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:08.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.086615882s Jan 29 18:09:08.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:09.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:09.810: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:10.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m52.086891104s Jan 29 18:09:10.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:11.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:11.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:12.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.086684924s Jan 29 18:09:12.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:13.893: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:13.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:14.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m56.088284932s Jan 29 18:09:14.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:15.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:15.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:16.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.086429957s Jan 29 18:09:16.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:17.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:17.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:18.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m0.08684187s Jan 29 18:09:18.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:20.020: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:20.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:20.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.087122718s Jan 29 18:09:20.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:22.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:22.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:22.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m4.086313185s Jan 29 18:09:22.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:24.106: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:24.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:24.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.087412924s Jan 29 18:09:24.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:26.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:26.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:26.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m8.086153711s Jan 29 18:09:26.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:28.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:28.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:28.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.086751251s Jan 29 18:09:28.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:30.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:30.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:30.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.086282555s Jan 29 18:09:30.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:32.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:32.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:32.799: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.090092251s Jan 29 18:09:32.800: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:34.325: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:09:34.325: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-4xsdn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:09:34.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:34.326: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s96g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:09:34.370: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g": Phase="Running", Reason="", readiness=true. Elapsed: 44.496798ms Jan 29 18:09:34.370: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g" satisfied condition "running and ready, or succeeded" Jan 29 18:09:34.370: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=true. Elapsed: 44.869369ms Jan 29 18:09:34.370: INFO: Pod "metadata-proxy-v0.1-4xsdn" satisfied condition "running and ready, or succeeded" Jan 29 18:09:34.370: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:09:34.370: INFO: Reboot successful on node bootstrap-e2e-minion-group-s96g Jan 29 18:09:34.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.086699223s Jan 29 18:09:34.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:36.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:36.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.087193432s Jan 29 18:09:36.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:38.414: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:09:38.414: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8v287" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:09:38.414: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:09:38.457: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz": Phase="Running", Reason="", readiness=true. Elapsed: 43.23343ms Jan 29 18:09:38.457: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" satisfied condition "running and ready, or succeeded" Jan 29 18:09:38.457: INFO: Pod "metadata-proxy-v0.1-8v287": Phase="Running", Reason="", readiness=true. Elapsed: 43.470317ms Jan 29 18:09:38.457: INFO: Pod "metadata-proxy-v0.1-8v287" satisfied condition "running and ready, or succeeded" Jan 29 18:09:38.457: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:09:38.457: INFO: Reboot successful on node bootstrap-e2e-minion-group-dsnz Jan 29 18:09:38.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m20.085953175s Jan 29 18:09:38.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:40.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.086563057s Jan 29 18:09:40.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:42.800: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.090246739s Jan 29 18:09:42.800: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:44.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.08632516s Jan 29 18:09:44.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:46.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m28.086327831s Jan 29 18:09:46.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:48.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.086312886s Jan 29 18:09:48.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:50.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.08793912s Jan 29 18:09:50.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:52.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.086902686s Jan 29 18:09:52.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:54.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m36.08828119s Jan 29 18:09:54.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:56.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.088350779s Jan 29 18:09:56.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:58.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.086725784s Jan 29 18:09:58.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:00.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m42.08655078s Jan 29 18:10:00.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:02.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m44.087076826s Jan 29 18:10:02.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:04.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.086715306s Jan 29 18:10:04.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:06.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.087630051s Jan 29 18:10:06.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:08.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.0868895s Jan 29 18:10:08.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:10.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m52.086199848s Jan 29 18:10:10.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:12.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m54.087784763s Jan 29 18:10:12.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:14.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.08606828s Jan 29 18:10:14.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:16.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.086144651s Jan 29 18:10:16.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:18.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m0.086726939s Jan 29 18:10:18.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:20.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.086404477s Jan 29 18:10:20.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:22.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.086737553s Jan 29 18:10:22.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:24.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m6.086314332s Jan 29 18:10:24.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:26.846: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m8.136475082s Jan 29 18:10:26.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:28.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m10.087942928s Jan 29 18:10:28.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:30.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m12.087292635s Jan 29 18:10:30.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:32.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.087812344s Jan 29 18:10:32.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:34.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m16.086278027s Jan 29 18:10:34.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:36.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.087083035s Jan 29 18:10:36.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:38.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.08887743s Jan 29 18:10:38.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:40.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.086881243s Jan 29 18:10:40.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:42.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m24.086290566s Jan 29 18:10:42.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:44.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.08793661s Jan 29 18:10:44.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:46.800: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.091089169s Jan 29 18:10:46.801: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:48.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.085769978s Jan 29 18:10:48.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:50.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m32.086167971s Jan 29 18:10:50.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:52.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.08790209s Jan 29 18:10:52.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:54.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m36.087333754s Jan 29 18:10:54.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:56.804: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.094914442s Jan 29 18:10:56.804: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:10:58.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m40.086771537s Jan 29 18:10:58.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:00.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m42.086424576s Jan 29 18:11:00.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:02.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m44.086564498s Jan 29 18:11:02.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:04.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.087636951s Jan 29 18:11:04.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:06.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m48.087166064s Jan 29 18:11:06.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:08.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.086449935s Jan 29 18:11:08.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:10.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m52.086233731s Jan 29 18:11:10.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:12.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m54.087867598s Jan 29 18:11:12.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:14.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m56.08903186s Jan 29 18:11:14.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:16.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.085884416s Jan 29 18:11:16.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:18.800: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m0.090611353s Jan 29 18:11:18.800: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:20.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.086902263s Jan 29 18:11:20.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:22.806: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m4.096774561s Jan 29 18:11:22.806: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:24.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m6.08809698s Jan 29 18:11:24.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:26.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m8.086558222s Jan 29 18:11:26.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:28.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.087562838s Jan 29 18:11:28.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:30.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m12.087671846s Jan 29 18:11:30.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:32.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.086973983s Jan 29 18:11:32.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:34.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m16.085989247s Jan 29 18:11:34.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:36.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.086637779s Jan 29 18:11:36.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:38.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m20.086973984s Jan 29 18:11:38.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:40.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.086526397s Jan 29 18:11:40.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:42.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m24.088790476s Jan 29 18:11:42.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:44.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.087479242s Jan 29 18:11:44.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:46.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m28.086766907s Jan 29 18:11:46.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:48.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.086012392s Jan 29 18:11:48.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:50.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m32.086528782s Jan 29 18:11:50.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:52.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.087626751s Jan 29 18:11:52.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:54.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m36.085804187s Jan 29 18:11:54.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:56.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.086642164s Jan 29 18:11:56.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:11:58.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.0861438s Jan 29 18:11:58.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:00.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.086149194s Jan 29 18:12:00.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:02.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m44.085826852s Jan 29 18:12:02.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:04.804: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.094372684s Jan 29 18:12:04.804: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:06.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m48.087714086s Jan 29 18:12:06.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:08.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.086175612s Jan 29 18:12:08.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:10.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m52.086316251s Jan 29 18:12:10.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:12.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.087745106s Jan 29 18:12:12.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:14.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.088326199s Jan 29 18:12:14.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:16.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.087781502s Jan 29 18:12:16.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 6m14.77s) test/e2e/cloud/gcp/reboot.go:144 In [It] (Node Runtime: 5m0s) test/e2e/cloud/gcp/reboot.go:144 Spec Goroutine goroutine 3060 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc000a8ea68?) 
    /usr/local/go/src/runtime/sema.go:62
sync.(*WaitGroup).Wait(0x7f8de01232c0?)
    /usr/local/go/src/sync/waitgroup.go:139
> k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f8de01232c0?, 0xc004bd4840}, {0x8147108?, 0xc002018000}, {0xc0001cc820, 0x187}, 0xc004dad3e0)
    test/e2e/cloud/gcp/reboot.go:181
> k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.8({0x7f8de01232c0, 0xc004bd4840})
    test/e2e/cloud/gcp/reboot.go:149
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc004bd4840})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
Goroutines of Interest
goroutine 3043 [chan receive, 5 minutes]
k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7f8de01232c0?, 0xc004bd4840}, {0x8147108?, 0xc002018000}, {0x76d190b, 0xb}, {0xc001104f40, 0x4, 0x4}, 0x45d964b800, ...)
    test/e2e/framework/pod/resource.go:531
k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...)
    test/e2e/framework/pod/resource.go:508
> k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f8de01232c0, 0xc004bd4840}, {0x8147108, 0xc002018000}, {0x7ffd0e8245ee, 0x3}, {0xc00132c720, 0x1f}, {0xc0001cc820, 0x187})
    test/e2e/cloud/gcp/reboot.go:284
> k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0)
    test/e2e/cloud/gcp/reboot.go:173
> k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
    test/e2e/cloud/gcp/reboot.go:169
Jan 29 18:12:18.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.087079846s
Jan 29 18:12:18.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:12:18.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.128410921s
Jan 29 18:12:18.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:12:18.838: INFO: Pod kube-dns-autoscaler-5f6455f985-9smhj failed to be running and ready, or succeeded.
Jan 29 18:12:18.838: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-9smhj]
Jan 29 18:12:18.838: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-9smhj: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 17:57:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 18:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 18:04:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 17:57:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-29 17:57:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 18:02:46 +0000 UTC,FinishedAt:2023-01-29 18:03:27 +0000 UTC,ContainerID:containerd://950ea0c01909be3e17165f748ab6c2d38a95a221cf18aba5f3ab884dd49d543c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:3 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://950ea0c01909be3e17165f748ab6c2d38a95a221cf18aba5f3ab884dd49d543c Started:0xc004910857}] QOSClass:Burstable EphemeralContainerStatuses:[]}
Jan 29 18:12:18.986: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-9smhj/autoscaler:
Jan 29 18:12:18.986: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-9smhj/autoscaler:
Jan 29 18:12:18.986: INFO: Node bootstrap-e2e-minion-group-9h8t failed reboot test.
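The five-minute loop above is the framework re-reading the pod roughly every 2s and testing its Ready condition until the timeout expires. Below is a minimal client-go sketch of that kind of readiness poll; it is not the framework's own CheckPodsRunningReadyOrSucceeded helper, and the kubeconfig path, the 2s/5m cadence, and the namespace/pod name are assumptions lifted from this run.

```go
// Sketch only: poll a pod until its Ready condition is True, mirroring the
// repeated "didn't have condition {Ready True}" evaluations in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; this CI run points at /workspace/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, pod = "kube-system", "kube-dns-autoscaler-5f6455f985-9smhj"

	// Poll every 2s for up to 5m, matching the cadence and timeout seen above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient API errors
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("Phase=%q Ready=%q reason=%q\n", p.Status.Phase, c.Status, c.Reason)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("pod never became ready:", err)
	}
}
```

Run against a live cluster, a sketch like this prints one Phase/Ready line per poll, much like the Elapsed lines above, and returns a timeout error if the container never passes its readiness check.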
Jan 29 18:12:18.986: INFO: Executing termination hook on nodes
Jan 29 18:12:18.986: INFO: Getting external IP address for bootstrap-e2e-minion-group-9h8t
Jan 29 18:12:18.986: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-9h8t(35.247.75.88:22)
Jan 29 18:12:19.524: INFO: ssh prow@35.247.75.88:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 18:12:19.524: INFO: ssh prow@35.247.75.88:22: stdout: ""
Jan 29 18:12:19.524: INFO: ssh prow@35.247.75.88:22: stderr: "cat: /tmp/drop-outbound.log: No such file or directory\n"
Jan 29 18:12:19.524: INFO: ssh prow@35.247.75.88:22: exit code: 1
Jan 29 18:12:19.524: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-outbound.log: No such file or directory )
Jan 29 18:12:19.524: INFO: Getting external IP address for bootstrap-e2e-minion-group-dsnz
Jan 29 18:12:19.524: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-dsnz(34.168.175.64:22)
Jan 29 18:12:20.051: INFO: ssh prow@34.168.175.64:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 18:12:20.051: INFO: ssh prow@34.168.175.64:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 18:07:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 18:12:20.051: INFO: ssh prow@34.168.175.64:22: stderr: ""
Jan 29 18:12:20.051: INFO: ssh prow@34.168.175.64:22: exit code: 0
Jan 29 18:12:20.051: INFO: Getting external IP address for bootstrap-e2e-minion-group-s96g
Jan 29 18:12:20.051: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-s96g(35.233.157.204:22)
Jan 29 18:12:20.563: INFO: ssh prow@35.233.157.204:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 18:12:20.563: INFO: ssh prow@35.233.157.204:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 18:07:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 18:12:20.563: INFO: ssh prow@35.233.157.204:22: stderr: ""
Jan 29 18:12:20.563: INFO: ssh prow@35.233.157.204:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 18:12:20.563
< Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 18:12:20.563 (5m2.139s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 18:12:20.563
STEP: Collecting events from namespace "kube-system".
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 18:12:20.563 Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-l4n7p to bootstrap-e2e-minion-group-s96g Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 971.317987ms (971.327027ms including waiting) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Readiness probe failed: Get "http://10.64.0.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Container coredns failed liveness probe, will be restarted Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Failed: Error: failed to get sandbox container task: no running task found: task ee1da3c0beb16cde0b660c004353384fc19f8a2377b29f81fd02e1d3e5b59fb9 not found: not found Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-l4n7p Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-l4n7p Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-wbh56 to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.16200824s (3.162016014s including waiting) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.7:8181/ready": dial tcp 10.64.2.7:8181: connect: connection refused Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.12:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.22:8181/ready": dial tcp 10.64.2.22:8181: connect: connection refused Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.26:8181/ready": dial tcp 10.64.2.26:8181: connect: connection refused Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-wbh56 Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-wbh56 Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-l4n7p Jan 29 18:12:20.620: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 18:12:20.620: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 18:12:20.620: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a4d04 became leader Jan 29 18:12:20.620: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_56ded became leader Jan 29 18:12:20.620: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e42fb became leader Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-bp4qk to bootstrap-e2e-minion-group-dsnz Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 599.898279ms (599.907942ms including waiting) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Failed: Error: failed to get sandbox container task: no running task found: task 34af16972f1a15f7cd3de2359f5283edc4cb1afaaa95c05825bdfd8c875871a7 not found: not found Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-ksl2d to bootstrap-e2e-minion-group-s96g Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.803177ms (589.813561ms including waiting) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Failed: Error: failed to get sandbox container task: no running task found: task 0ac9f140b699b69eb44f2572006896f1eae931c0983a4f39deffc55da2ac125d not found: not found Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-ksl2d_kube-system(42ec1e63-2728-4047-9c5d-36e785eb0141) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-mn6xc to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 681.556582ms (681.572388ms including waiting) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.14:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Failed: Error: failed to get sandbox container task: no running task found: task 21c426eded0fc015f1ab3856fd138eba814545aef659a4d560d8a1cd814f6bd1 not found: not found Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:12:20.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-mn6xc Jan 29 18:12:20.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-ksl2d Jan 29 18:12:20.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-bp4qk Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 18:12:20.621: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:12:20.621: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 18:12:20.621: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 18:12:20.621: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 18:12:20.621: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8973717d-b4ea-4827-92b8-c82ef47ba807 became leader Jan 29 18:12:20.621: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_284471f6-f43b-49f3-ab98-bff9e88f88c0 became leader Jan 29 18:12:20.621: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_72fddcf4-b350-465c-9671-5552ed476fbc became leader Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-9smhj to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.941795269s (1.941803615s including waiting) Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-9smhj_kube-system(7269d21a-8222-4363-800b-6662fd8f87a9) Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-9smhj Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-9smhj Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9h8t_kube-system(aa9fac52dcd6313a298b129133e69882) Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-dsnz_kube-system(4f6c109bb0f65648d820240fca6d0382) Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_be71f07a-21fc-4f39-aa70-aeae362a8313 became leader Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_0d5a2b19-5601-408b-a47c-76493d5996e8 became leader Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f6e75d31-8d47-43d3-83a7-d2209fd23f64 became leader Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_11c8be03-c1cc-493a-8694-faac9b6108ed became leader Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ca187ae7-92eb-4516-879e-5110d01cd353 became leader Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-xnppl to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.153078114s (2.153091476s including waiting) Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for 
l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-xnppl Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-xnppl Jan 29 18:12:20.621: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4xsdn to bootstrap-e2e-minion-group-s96g Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 739.554608ms (739.575415ms including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.702613211s (1.702626118s including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8v287 to bootstrap-e2e-minion-group-dsnz Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 739.759923ms (739.769687ms including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.535493899s (1.535504014s including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-dnsxr to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 703.801985ms (703.819535ms including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.694396463s (1.694410106s including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-glg4c to bootstrap-e2e-master Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 702.853427ms (702.859315ms including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.958774245s (1.958780937s including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4xsdn Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8v287 Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-dnsxr Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-glg4c Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. 
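The FailedScheduling events above come from the node.kubernetes.io/not-ready taint applied while a node reboots: DaemonSet pods such as metadata-proxy keep running because the DaemonSet controller adds matching tolerations, while plain Deployment pods (metrics-server, for example) must wait for the taint to clear. A minimal sketch of such a toleration, using the k8s.io/api/core/v1 types (the 300-second bound is an illustrative assumption, not a value taken from this cluster):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative only: tolerate the not-ready taint for up to 300s so the pod
	// is not evicted immediately while the node reboots.
	tolerationSeconds := int64(300)
	tol := corev1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &tolerationSeconds,
	}
	fmt.Printf("%+v\n", tol)
}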
Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-gpgw8 to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.415068214s (3.415086682s including waiting) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 937.450811ms (937.460293ms including waiting) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-gpgw8 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-gpgw8 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 to bootstrap-e2e-minion-group-dsnz Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.219802168s (1.219816016s including waiting) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 923.70697ms (923.715586ms including waiting) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.8:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.8:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.8:10250/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
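The probe errors quoted above ("context deadline exceeded (Client.Timeout exceeded while awaiting headers)" and the connection-refused variants) are the standard error strings Go's net/http client produces when a request deadline expires or the target socket is closed, which is what the kubelet's HTTP probes hit while metrics-server was being restarted. A self-contained sketch that reproduces the same error class (the 1-second timeout is illustrative; the endpoint is the pod address from the events above and will normally just time out or refuse the connection when run elsewhere):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same error class as the kubelet probe events above: a hard client timeout
	// while waiting for response headers. The endpoint is the pod address from
	// the log; from anywhere else this will time out or be refused.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("https://10.64.1.8:10250/readyz")
	if err != nil {
		fmt.Println("probe-style failure:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}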
Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.238241255s (2.238248989s including waiting) Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
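The repeated "Back-off restarting failed container" events reflect the kubelet's crash-loop back-off. Assuming the commonly documented defaults (roughly a 10-second initial delay, doubling per crash, capped at five minutes; these numbers are an assumption, not something this log states), the restart schedule looks like:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: ~10s initial back-off, doubling per crash, capped at 5m.
	delay := 10 * time.Second
	maxDelay := 5 * time.Minute
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}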
Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 18:12:20.621 (58ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 18:12:20.621 Jan 29 18:12:20.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 18:12:20.666 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 18:12:20.666 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 18:12:20.667 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 18:12:20.667 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 18:12:20.667 STEP: Collecting events from namespace "reboot-2189". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 18:12:20.667 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 18:12:20.71 Jan 29 18:12:20.751: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 18:12:20.751: INFO: Jan 29 18:12:20.796: INFO: Logging node info for node bootstrap-e2e-master Jan 29 18:12:20.838: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master de4d0d91-417f-4d9e-8e88-821fcf72cad3 2233 0 2023-01-29 17:57:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 17:57:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 17:57:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:07:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858378752 0} {<nil>} 3767948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596234752 0} {<nil>} 3511948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:24 +0000 UTC,LastTransitionTime:2023-01-29 17:57:24 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:07:42 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:07:42 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:07:42 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:07:42 +0000 UTC,LastTransitionTime:2023-01-29 17:57:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.63.53,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:284bab09bf08f5292691fdfc4343523f,SystemUUID:284bab09-bf08-f529-2691-fdfc4343523f,BootID:33ca137b-9efb-4fae-bd1d-b736b2efdf21,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:12:20.838: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 18:12:20.884: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 18:12:20.944: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 29 18:12:20.944: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 29 18:12:20.944: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 17:56:39 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 29 18:12:20.944: INFO: metadata-proxy-v0.1-glg4c started at 2023-01-29 17:57:10 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:20.944: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 18:12:20.944: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 18:12:20.944: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container kube-apiserver ready: true, restart count 1 Jan 29 18:12:20.944: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container kube-scheduler ready: true, restart count 4 Jan 29 18:12:20.944: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container etcd-container ready: true, restart count 2 Jan 29 18:12:20.944: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container etcd-container ready: true, restart count 1 Jan 29 18:12:20.944: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 17:56:39 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container l7-lb-controller ready: true, restart count 6 Jan 29 18:12:21.156: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 18:12:21.156: INFO: Logging node info for node bootstrap-e2e-minion-group-9h8t Jan 29 18:12:21.198: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9h8t a73caa79-09be-4952-9d8a-63d5ff2cf1d1 2468 0 2023-01-29 17:57:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9h8t kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:03:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 18:09:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 18:09:24 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-9h8t,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:24 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:24 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:24 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:09:24 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.75.88,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9h8t.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9h8t.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5df247ae00a0f0d760b6034aea558213,SystemUUID:5df247ae-00a0-f0d7-60b6-034aea558213,BootID:856ae91e-496d-4106-a705-abcfc446e6ec,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:12:21.199: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9h8t Jan 29 18:12:21.245: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9h8t Jan 29 18:12:21.297: INFO: metadata-proxy-v0.1-dnsxr started at 2023-01-29 17:57:05 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:21.297: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 
18:12:21.297: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:12:21.297: INFO: konnectivity-agent-mn6xc started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container konnectivity-agent ready: true, restart count 8 Jan 29 18:12:21.297: INFO: kube-proxy-bootstrap-e2e-minion-group-9h8t started at 2023-01-29 17:57:04 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 18:12:21.297: INFO: l7-default-backend-8549d69d99-xnppl started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container default-http-backend ready: true, restart count 3 Jan 29 18:12:21.297: INFO: volume-snapshot-controller-0 started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container volume-snapshot-controller ready: true, restart count 10 Jan 29 18:12:21.297: INFO: kube-dns-autoscaler-5f6455f985-9smhj started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container autoscaler ready: false, restart count 3 Jan 29 18:12:21.297: INFO: coredns-6846b5b5f-wbh56 started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container coredns ready: false, restart count 11 Jan 29 18:12:21.465: INFO: Latency metrics for node bootstrap-e2e-minion-group-9h8t Jan 29 18:12:21.465: INFO: Logging node info for node bootstrap-e2e-minion-group-dsnz Jan 29 18:12:21.507: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-dsnz 1713c218-b219-4b0d-b0ae-fbc51ee95790 2509 0 2023-01-29 17:57:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-dsnz kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:08:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:09:34 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 18:09:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-dsnz,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:34 +0000 UTC,LastTransitionTime:2023-01-29 18:09:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:34 +0000 UTC,LastTransitionTime:2023-01-29 18:09:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:34 +0000 UTC,LastTransitionTime:2023-01-29 18:09:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:09:34 +0000 UTC,LastTransitionTime:2023-01-29 18:09:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.175.64,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-dsnz.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-dsnz.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d9c2682ffdf61476c94207a696b50d63,SystemUUID:d9c2682f-fdf6-1476-c942-07a696b50d63,BootID:b1fc4976-52e3-4021-bea5-d719e168a208,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:12:21.507: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-dsnz Jan 29 18:12:21.554: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-dsnz Jan 29 18:12:21.616: INFO: kube-proxy-bootstrap-e2e-minion-group-dsnz started at 2023-01-29 17:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.616: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 18:12:21.616: INFO: metadata-proxy-v0.1-8v287 started at 2023-01-29 17:57:04 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:21.616: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 18:12:21.616: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:12:21.616: INFO: konnectivity-agent-bp4qk started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.616: INFO: Container konnectivity-agent ready: false, restart count 3 Jan 29 18:12:21.616: INFO: metrics-server-v0.5.2-867b8754b9-fpt69 started at 2023-01-29 17:57:37 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:21.616: INFO: Container metrics-server ready: false, restart count 4 Jan 29 18:12:21.616: INFO: Container metrics-server-nanny ready: false, restart count 3 Jan 29 18:12:21.774: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-dsnz Jan 29 18:12:21.774: INFO: Logging node info for node bootstrap-e2e-minion-group-s96g Jan 29 18:12:21.816: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s96g 7998b680-fc29-469b-9d21-811123638809 2495 0 2023-01-29 17:57:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s96g kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:08:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 18:09:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:09:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:09:31 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-s96g,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:31 +0000 UTC,LastTransitionTime:2023-01-29 
18:09:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:31 +0000 UTC,LastTransitionTime:2023-01-29 18:09:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:31 +0000 UTC,LastTransitionTime:2023-01-29 18:09:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:09:31 +0000 UTC,LastTransitionTime:2023-01-29 18:09:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.157.204,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s96g.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s96g.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:01be44620799a91cfe8a68e6a28b1e90,SystemUUID:01be4462-0799-a91c-fe8a-68e6a28b1e90,BootID:2d74e887-840c-4a68-8c50-3b4295ae1098,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:12:21.816: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s96g Jan 29 18:12:21.864: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s96g Jan 29 18:12:21.926: INFO: konnectivity-agent-ksl2d started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.926: INFO: Container konnectivity-agent ready: false, restart count 3 Jan 29 18:12:21.926: INFO: coredns-6846b5b5f-l4n7p started at 2023-01-29 17:57:19 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.926: INFO: Container coredns ready: false, restart count 2 Jan 29 18:12:21.926: INFO: kube-proxy-bootstrap-e2e-minion-group-s96g started at 2023-01-29 17:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.926: INFO: Container kube-proxy ready: 
true, restart count 3 Jan 29 18:12:21.926: INFO: metadata-proxy-v0.1-4xsdn started at 2023-01-29 17:57:03 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:21.926: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 18:12:21.926: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:12:22.093: INFO: Latency metrics for node bootstrap-e2e-minion-group-s96g END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 18:12:22.093 (1.426s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 18:12:22.093 (1.426s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 18:12:22.093 STEP: Destroying namespace "reboot-2189" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 18:12:22.093 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 18:12:22.137 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 18:12:22.137 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 18:12:22.137 (0s)
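The per-node dumps above (Node Info, pod list, readiness, restart counts) can be reproduced against a live cluster with plain kubectl. A minimal sketch, assuming access to the same cluster and reusing one of the node names from the log:

    # kube-system pods scheduled on the node, with readiness and restart counts
    # (mirrors the "Logging pods the kubelet thinks is on node ..." dump above).
    kubectl get pods -n kube-system -o wide \
      --field-selector spec.nodeName=bootstrap-e2e-minion-group-dsnz

    # Node conditions, capacity and recent events (mirrors the Node Info dump).
    kubectl describe node bootstrap-e2e-minion-group-dsnz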
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 18:12:20.563 (from junit_01.xml)
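The command the test runs over SSH on each node appears further down in this log in its escaped single-line form. Reformatted for readability (a sketch of the quoted script only, intended for disposable test nodes), it drops all outbound traffic except loopback for about two minutes and then restores it:

    # Reconstructed from the SSH command quoted below in this log.
    nohup sh -c '
        set -x
        sleep 10
        while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
        date
        sleep 120
        while true; do sudo iptables -D OUTPUT -j DROP && break; done
        while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-outbound.log 2>&1 &

Because outbound packets are dropped while the kubelet keeps running, the API server stops receiving node status updates, and the node controller eventually marks the node NotReady and taints it as unreachable, which is what the remainder of the log shows.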
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:06:03.654 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:06:03.654 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:06:03.654 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 18:06:03.655 Jan 29 18:06:03.655: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 18:06:03.656 Jan 29 18:06:03.696: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:06:05.736: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 18:07:18.263 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 18:07:18.344 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:07:18.424 (1m14.77s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:07:18.424 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:07:18.424 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 18:07:18.424 Jan 29 18:07:18.607: INFO: Getting bootstrap-e2e-minion-group-s96g Jan 29 18:07:18.607: INFO: Getting bootstrap-e2e-minion-group-dsnz Jan 29 18:07:18.608: INFO: Getting bootstrap-e2e-minion-group-9h8t Jan 29 18:07:18.651: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s96g condition Ready to be true Jan 29 18:07:18.651: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dsnz condition Ready to be true Jan 29 18:07:18.668: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-9h8t condition Ready to be true Jan 29 18:07:18.695: INFO: Node bootstrap-e2e-minion-group-dsnz has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8v287" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:07:18.695: INFO: Node bootstrap-e2e-minion-group-s96g has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-4xsdn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for pod 
"kube-proxy-bootstrap-e2e-minion-group-dsnz" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:07:18.695: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s96g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:07:18.709: INFO: Node bootstrap-e2e-minion-group-9h8t has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-9smhj] Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-9smhj] Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-9smhj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-dnsxr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:07:18.709: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9h8t" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:07:18.738: INFO: Pod "metadata-proxy-v0.1-8v287": Phase="Running", Reason="", readiness=true. Elapsed: 43.137425ms Jan 29 18:07:18.738: INFO: Pod "metadata-proxy-v0.1-8v287" satisfied condition "running and ready, or succeeded" Jan 29 18:07:18.740: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=true. Elapsed: 45.18281ms Jan 29 18:07:18.740: INFO: Pod "metadata-proxy-v0.1-4xsdn" satisfied condition "running and ready, or succeeded" Jan 29 18:07:18.740: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz": Phase="Running", Reason="", readiness=true. Elapsed: 45.192454ms Jan 29 18:07:18.740: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" satisfied condition "running and ready, or succeeded" Jan 29 18:07:18.740: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:07:18.740: INFO: Getting external IP address for bootstrap-e2e-minion-group-dsnz Jan 29 18:07:18.740: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-dsnz(34.168.175.64:22) Jan 29 18:07:18.741: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g": Phase="Running", Reason="", readiness=true. Elapsed: 45.7081ms Jan 29 18:07:18.741: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g" satisfied condition "running and ready, or succeeded" Jan 29 18:07:18.741: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:07:18.741: INFO: Getting external IP address for bootstrap-e2e-minion-group-s96g Jan 29 18:07:18.741: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-s96g(35.233.157.204:22) Jan 29 18:07:18.754: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 44.364125ms Jan 29 18:07:18.754: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:18.755: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.218222ms Jan 29 18:07:18.755: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 18:07:18.755: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9h8t": Phase="Running", Reason="", readiness=true. Elapsed: 45.225787ms Jan 29 18:07:18.755: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9h8t" satisfied condition "running and ready, or succeeded" Jan 29 18:07:18.755: INFO: Pod "metadata-proxy-v0.1-dnsxr": Phase="Running", Reason="", readiness=true. 
Elapsed: 45.308215ms Jan 29 18:07:18.755: INFO: Pod "metadata-proxy-v0.1-dnsxr" satisfied condition "running and ready, or succeeded" Jan 29 18:07:19.271: INFO: ssh prow@35.233.157.204:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 18:07:19.271: INFO: ssh prow@35.233.157.204:22: stdout: "" Jan 29 18:07:19.271: INFO: ssh prow@35.233.157.204:22: stderr: "" Jan 29 18:07:19.271: INFO: ssh prow@35.233.157.204:22: exit code: 0 Jan 29 18:07:19.271: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s96g condition Ready to be false Jan 29 18:07:19.282: INFO: ssh prow@34.168.175.64:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 18:07:19.282: INFO: ssh prow@34.168.175.64:22: stdout: "" Jan 29 18:07:19.282: INFO: ssh prow@34.168.175.64:22: stderr: "" Jan 29 18:07:19.282: INFO: ssh prow@34.168.175.64:22: exit code: 0 Jan 29 18:07:19.282: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-dsnz condition Ready to be false Jan 29 18:07:19.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:19.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:20.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2.087116718s Jan 29 18:07:20.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:21.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:21.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:22.803: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.093155406s Jan 29 18:07:22.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:23.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:23.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:24.801: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 6.092091957s Jan 29 18:07:24.801: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:25.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:25.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:26.799: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 8.089948815s Jan 29 18:07:26.799: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:27.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:27.505: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:07:28.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 10.086820434s Jan 29 18:07:28.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:29.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:29.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:30.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 12.086559954s Jan 29 18:07:30.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:31.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:31.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:32.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 14.089127781s Jan 29 18:07:32.799: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:33.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:33.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:34.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 16.086088841s Jan 29 18:07:34.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:35.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:35.676: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:36.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 18.086912788s Jan 29 18:07:36.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:37.716: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:37.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:38.801: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 20.091644588s Jan 29 18:07:38.801: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:39.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:07:39.762: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:40.808: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 22.09867204s Jan 29 18:07:40.808: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:41.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:41.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:42.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 24.086471979s Jan 29 18:07:42.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:43.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:43.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:44.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 26.08808201s Jan 29 18:07:44.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:45.889: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:45.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:46.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 28.08674272s Jan 29 18:07:46.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:47.932: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:47.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:48.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 30.086614055s Jan 29 18:07:48.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:49.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:49.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:50.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.086821797s Jan 29 18:07:50.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:52.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:52.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:52.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 34.086717896s Jan 29 18:07:52.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:54.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:54.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:54.803: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 36.093782606s Jan 29 18:07:54.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:56.113: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:56.113: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:07:56.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 38.086720019s Jan 29 18:07:56.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:07:58.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:58.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:07:58.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 40.085946806s Jan 29 18:07:58.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:00.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:08:00.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:08:00.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 42.087611921s Jan 29 18:08:00.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:02.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:08:02.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:08:02.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 44.086524248s Jan 29 18:08:02.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:04.358: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-dsnz condition Ready to be true Jan 29 18:08:04.358: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-s96g condition Ready to be true Jan 29 18:08:04.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:04.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:04.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 46.087945361s Jan 29 18:08:04.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:06.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:06.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:06.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.087149284s Jan 29 18:08:06.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:08.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:08.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:08.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 50.086433349s Jan 29 18:08:08.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:10.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:10.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:10.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 52.086294147s Jan 29 18:08:10.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:12.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
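By this point both affected nodes report a failed Ready condition with reason NodeStatusUnknown ("Kubelet stopped posting node status") and are tainted by the node controller as unreachable. The same transition can be watched by hand; a minimal sketch, assuming kubectl access to the cluster and reusing a node name from the log:

    # Ready condition of the node (not True while outbound traffic is dropped,
    # flipping back once the kubelet can reach the API server again).
    kubectl get node bootstrap-e2e-minion-group-s96g \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

    # node.kubernetes.io/unreachable taints applied while the node is cut off.
    kubectl get node bootstrap-e2e-minion-group-s96g \
      -o jsonpath='{range .spec.taints[*]}{.key}={.effect}{"\n"}{end}'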
Jan 29 18:08:12.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:12.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 54.086657986s Jan 29 18:08:12.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:14.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:14.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:14.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 56.087867224s Jan 29 18:08:14.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:16.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:08:16.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:16.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.08664219s Jan 29 18:08:16.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:18.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:18.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:18.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.087400767s Jan 29 18:08:18.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:08:20.757: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:08:20.757: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:08:20.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
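The repeated "Condition Ready of node ... is false" and "tainted by NodeController" lines come from polling the Node object: once the kubelet stops posting status the Ready condition is no longer True (Reason: NodeStatusUnknown) and the node controller adds node.kubernetes.io/unreachable taints. Below is a minimal client-go sketch of an equivalent check; the function names and kubeconfig handling are illustrative, not the e2e framework's own helper.

// nodecheck.go - minimal sketch, assuming client-go and a kubeconfig at the default path.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReadyAndUnreachableTaints reports whether the node's Ready condition is True
// and returns any node.kubernetes.io/unreachable taints set by the node controller.
func nodeReadyAndUnreachableTaints(ctx context.Context, cs kubernetes.Interface, name string) (bool, []v1.Taint, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, nil, err
	}
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			// c.Reason/c.Message carry strings like "NodeStatusUnknown" /
			// "Kubelet stopped posting node status." seen in the log above.
			ready = c.Status == v1.ConditionTrue
		}
	}
	var unreachable []v1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" {
			unreachable = append(unreachable, t)
		}
	}
	return ready, unreachable, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, taints, err := nodeReadyAndUnreachableTaints(context.Background(), cs, "bootstrap-e2e-minion-group-dsnz")
	fmt.Println("ready:", ready, "unreachable taints:", taints, "err:", err)
}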
[... the same pattern continues every ~2s from 18:08:20.757 to 18:09:30.239: nodes bootstrap-e2e-minion-group-s96g and bootstrap-e2e-minion-group-dsnz stay NotReady and tainted node.kubernetes.io/unreachable (NoSchedule/NoExecute) by NodeController, while pod kube-dns-autoscaler-5f6455f985-9smhj on bootstrap-e2e-minion-group-9h8t stays Running but not Ready (containers with unready status: [autoscaler]), elapsed 1m2s through 2m12s; an identical "Error evaluating pod condition running and ready, or succeeded" message follows each of these pod checks ...]
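The two-second cadence above is a poll loop waiting for the kube-dns-autoscaler pod to report the {Ready True} condition (or to have Succeeded). A minimal sketch of that kind of wait follows, assuming client-go; the names are illustrative and this is not the framework's helper. It would be wired to a clientset exactly as in the earlier sketch.

// podwait.go - minimal sketch of a "running and ready, or succeeded" wait.
package rebootsketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodRunningReadyOrSucceeded polls every 2s until the pod is Running with
// Ready=True, or has Succeeded, or the timeout expires.
func waitPodRunningReadyOrSucceeded(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying through transient API errors
		}
		if pod.Status.Phase == v1.PodSucceeded {
			return true, nil
		}
		if pod.Status.Phase != v1.PodRunning {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodReady {
				// This is the {Ready True} condition the log keeps reporting as missing.
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}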
Elapsed: 2m12.086282555s Jan 29 18:09:30.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:32.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:18 +0000 UTC}]. Failure Jan 29 18:09:32.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:08:03 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:32.799: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.090092251s Jan 29 18:09:32.800: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:09:34.325: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:09:34.325: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-4xsdn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:09:34.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure Jan 29 18:09:34.326: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s96g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:09:34.370: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g": Phase="Running", Reason="", readiness=true. Elapsed: 44.496798ms Jan 29 18:09:34.370: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g" satisfied condition "running and ready, or succeeded" Jan 29 18:09:34.370: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=true. Elapsed: 44.869369ms Jan 29 18:09:34.370: INFO: Pod "metadata-proxy-v0.1-4xsdn" satisfied condition "running and ready, or succeeded" Jan 29 18:09:34.370: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Jan 29 18:09:34.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.086699223s
Jan 29 18:09:36.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:08:08 +0000 UTC}]. Failure
Jan 29 18:09:36.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.087193432s
Jan 29 18:09:38.414: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287]
Jan 29 18:09:38.414: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8v287" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:09:38.414: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:09:38.457: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz": Phase="Running", Reason="", readiness=true. Elapsed: 43.23343ms
Jan 29 18:09:38.457: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" satisfied condition "running and ready, or succeeded"
Jan 29 18:09:38.457: INFO: Pod "metadata-proxy-v0.1-8v287": Phase="Running", Reason="", readiness=true. Elapsed: 43.470317ms
Jan 29 18:09:38.457: INFO: Pod "metadata-proxy-v0.1-8v287" satisfied condition "running and ready, or succeeded"
Jan 29 18:09:38.457: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287]
Jan 29 18:09:38.457: INFO: Reboot successful on node bootstrap-e2e-minion-group-dsnz
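At this point both bootstrap-e2e-minion-group-s96g and bootstrap-e2e-minion-group-dsnz have rebooted successfully; the test keeps waiting on the "running and ready, or succeeded" check for kube-dns-autoscaler-5f6455f985-9smhj on bootstrap-e2e-minion-group-9h8t, polled every ~2s against a 5m0s budget (the Elapsed values in the log). As a rough, hand-written equivalent of that wait using client-go (not the framework's actual implementation; assumptions: kubeconfig at the default path, the pod name from this run):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true when the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same cadence and budget as the log: poll every 2s for up to 5m.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-dns-autoscaler-5f6455f985-9smhj", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient apiserver errors and keep polling
		}
		// "running and ready, or succeeded"
		if pod.Status.Phase == corev1.PodSucceeded {
			return true, nil
		}
		return pod.Status.Phase == corev1.PodRunning && podReady(pod), nil
	})
	fmt.Println("pod became ready:", err == nil)
}
```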
Jan 29 18:09:38.795: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.085953175s
(From here to the end of this excerpt the same readiness poll repeats every ~2s with unchanged conditions, Elapsed climbing from 2m20s toward the 5m0s budget; the intermediate entries at 2m22s through 4m48s are elided.)
Jan 29 18:12:08.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.086175612s
Jan 29 18:12:10.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false.
Elapsed: 4m52.086316251s Jan 29 18:12:10.796: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:12.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.087745106s Jan 29 18:12:12.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:14.798: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.088326199s Jan 29 18:12:14.798: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:16.797: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.087781502s Jan 29 18:12:16.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 6m14.77s)
  test/e2e/cloud/gcp/reboot.go:144
  In [It] (Node Runtime: 5m0s)
    test/e2e/cloud/gcp/reboot.go:144
  Spec Goroutine
  goroutine 3060 [semacquire, 5 minutes]
    sync.runtime_Semacquire(0xc000a8ea68?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7f8de01232c0?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f8de01232c0?, 0xc004bd4840}, {0x8147108?, 0xc002018000}, {0xc0001cc820, 0x187}, 0xc004dad3e0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.8({0x7f8de01232c0, 0xc004bd4840})
      test/e2e/cloud/gcp/reboot.go:149
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc004bd4840})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 3043 [chan receive, 5 minutes]
    k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7f8de01232c0?, 0xc004bd4840}, {0x8147108?, 0xc002018000}, {0x76d190b, 0xb}, {0xc001104f40, 0x4, 0x4}, 0x45d964b800, ...)
      test/e2e/framework/pod/resource.go:531
    k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...)
      test/e2e/framework/pod/resource.go:508
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f8de01232c0, 0xc004bd4840}, {0x8147108, 0xc002018000}, {0x7ffd0e8245ee, 0x3}, {0xc00132c720, 0x1f}, {0xc0001cc820, 0x187})
      test/e2e/cloud/gcp/reboot.go:284
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 18:12:18.796: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.087079846s Jan 29 18:12:18.797: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:18.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.128410921s Jan 29 18:12:18.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:18.838: INFO: Pod kube-dns-autoscaler-5f6455f985-9smhj failed to be running and ready, or succeeded. Jan 29 18:12:18.838: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false.
Pods: [kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-9smhj] Jan 29 18:12:18.838: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-9smhj: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 17:57:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 18:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 18:04:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 17:57:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-29 17:57:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 18:02:46 +0000 UTC,FinishedAt:2023-01-29 18:03:27 +0000 UTC,ContainerID:containerd://950ea0c01909be3e17165f748ab6c2d38a95a221cf18aba5f3ab884dd49d543c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:3 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://950ea0c01909be3e17165f748ab6c2d38a95a221cf18aba5f3ab884dd49d543c Started:0xc004910857}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 18:12:18.986: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-9smhj/autoscaler: Jan 29 18:12:18.986: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-9smhj/autoscaler: Jan 29 18:12:18.986: INFO: Node bootstrap-e2e-minion-group-9h8t failed reboot test. 
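What the framework is doing above is waiting up to five minutes for kube-dns-autoscaler-5f6455f985-9smhj to report the Ready condition, and it never does: the autoscaler container sits in a terminated state with exit code 255, reason Unknown, and no retrievable log. A rough manual equivalent of that wait, plus a dump of the same status data, would look like the sketch below (kubectl access to the test cluster is assumed; the pod name is the one from this run; the test itself goes through test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded, as the goroutine dump above shows):

    # Mirror the framework's 5-minute "running and ready" poll for the stuck pod.
    kubectl -n kube-system wait pod/kube-dns-autoscaler-5f6455f985-9smhj \
        --for=condition=Ready --timeout=5m

    # Inspect why it is not Ready: conditions, current container state, and events.
    kubectl -n kube-system get pod kube-dns-autoscaler-5f6455f985-9smhj \
        -o jsonpath='{.status.conditions}{"\n"}{.status.containerStatuses[0].state}{"\n"}'
    kubectl -n kube-system get events \
        --field-selector involvedObject.name=kube-dns-autoscaler-5f6455f985-9smhj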
Jan 29 18:12:18.986: INFO: Executing termination hook on nodes Jan 29 18:12:18.986: INFO: Getting external IP address for bootstrap-e2e-minion-group-9h8t Jan 29 18:12:18.986: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-9h8t(35.247.75.88:22) Jan 29 18:12:19.524: INFO: ssh prow@35.247.75.88:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 18:12:19.524: INFO: ssh prow@35.247.75.88:22: stdout: "" Jan 29 18:12:19.524: INFO: ssh prow@35.247.75.88:22: stderr: "cat: /tmp/drop-outbound.log: No such file or directory\n" Jan 29 18:12:19.524: INFO: ssh prow@35.247.75.88:22: exit code: 1 Jan 29 18:12:19.524: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-outbound.log: No such file or directory ) Jan 29 18:12:19.524: INFO: Getting external IP address for bootstrap-e2e-minion-group-dsnz Jan 29 18:12:19.524: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-dsnz(34.168.175.64:22) Jan 29 18:12:20.051: INFO: ssh prow@34.168.175.64:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 18:12:20.051: INFO: ssh prow@34.168.175.64:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 18:07:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 18:12:20.051: INFO: ssh prow@34.168.175.64:22: stderr: "" Jan 29 18:12:20.051: INFO: ssh prow@34.168.175.64:22: exit code: 0 Jan 29 18:12:20.051: INFO: Getting external IP address for bootstrap-e2e-minion-group-s96g Jan 29 18:12:20.051: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-s96g(35.233.157.204:22) Jan 29 18:12:20.563: INFO: ssh prow@35.233.157.204:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 18:12:20.563: INFO: ssh prow@35.233.157.204:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 18:07:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 18:12:20.563: INFO: ssh prow@35.233.157.204:22: stderr: "" Jan 29 18:12:20.563: INFO: ssh prow@35.233.157.204:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 18:12:20.563 < Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 18:12:20.563 (5m2.139s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 18:12:20.563 STEP: Collecting events from namespace "kube-system". 
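The stdout captured from the two reachable nodes is a shell trace of the outbound-drop hook the test pushes over SSH to isolate each node. Reconstructed from that trace alone (a sketch, not the test's literal source; the real hook retries each iptables call in a loop, which is what the '+ true' / '+ break' lines indicate), the hook amounts to:

    # Sketch of the drop-outbound hook implied by the trace above.
    # Runs in the background and writes its own trace to /tmp/drop-outbound.log.
    sh -xc '
        sleep 10                                          # let the SSH session that launched it return
        sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT  # keep loopback traffic alive
        sudo iptables -I OUTPUT 2 -j DROP                 # drop every other outbound packet
        date
        sleep 120                                         # stay "dark" for two minutes
        sudo iptables -D OUTPUT -j DROP                   # restore outbound traffic
        sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT
    ' > /tmp/drop-outbound.log 2>&1 &

The termination hook ("cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log") then checks that the script actually ran. On bootstrap-e2e-minion-group-dsnz and bootstrap-e2e-minion-group-s96g the trace shows the DROP rule was added and later removed; on bootstrap-e2e-minion-group-9h8t the log file is missing, so the trace was either never written or already consumed, leaving no confirmation of the drop window on the one node that failed the reboot test.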
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 18:12:20.563 Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-l4n7p to bootstrap-e2e-minion-group-s96g Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 971.317987ms (971.327027ms including waiting) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Readiness probe failed: Get "http://10.64.0.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Container coredns failed liveness probe, will be restarted Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Failed: Error: failed to get sandbox container task: no running task found: task ee1da3c0beb16cde0b660c004353384fc19f8a2377b29f81fd02e1d3e5b59fb9 not found: not found Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-l4n7p Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-l4n7p: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-l4n7p Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-wbh56 to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.16200824s (3.162016014s including waiting) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.7:8181/ready": dial tcp 10.64.2.7:8181: connect: connection refused Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.12:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.22:8181/ready": dial tcp 10.64.2.22:8181: connect: connection refused Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.26:8181/ready": dial tcp 10.64.2.26:8181: connect: connection refused Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-wbh56 Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-wbh56 Jan 29 18:12:20.620: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-l4n7p Jan 29 18:12:20.620: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 18:12:20.620: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 18:12:20.620: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 18:12:20.620: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a4d04 became leader Jan 29 18:12:20.620: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_56ded became leader Jan 29 18:12:20.620: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e42fb became leader Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-bp4qk to bootstrap-e2e-minion-group-dsnz Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 599.898279ms (599.907942ms including waiting) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Failed: Error: failed to get sandbox container task: no running task found: task 34af16972f1a15f7cd3de2359f5283edc4cb1afaaa95c05825bdfd8c875871a7 not found: not found Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-ksl2d to bootstrap-e2e-minion-group-s96g Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.803177ms (589.813561ms including waiting) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Failed: Error: failed to get sandbox container task: no running task found: task 0ac9f140b699b69eb44f2572006896f1eae931c0983a4f39deffc55da2ac125d not found: not found Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-ksl2d_kube-system(42ec1e63-2728-4047-9c5d-36e785eb0141) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-mn6xc to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 681.556582ms (681.572388ms including waiting) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.14:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Failed: Error: failed to get sandbox container task: no running task found: task 21c426eded0fc015f1ab3856fd138eba814545aef659a4d560d8a1cd814f6bd1 not found: not found Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.620: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:12:20.621: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:12:20.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-mn6xc Jan 29 18:12:20.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-ksl2d Jan 29 18:12:20.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-bp4qk Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 18:12:20.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 18:12:20.621: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 18:12:20.621: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:12:20.621: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 18:12:20.621: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 18:12:20.621: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 18:12:20.621: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8973717d-b4ea-4827-92b8-c82ef47ba807 became leader Jan 29 18:12:20.621: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_284471f6-f43b-49f3-ab98-bff9e88f88c0 became leader Jan 29 18:12:20.621: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_72fddcf4-b350-465c-9671-5552ed476fbc became leader Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-9smhj to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.941795269s (1.941803615s including waiting) Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container autoscaler Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-9smhj_kube-system(7269d21a-8222-4363-800b-6662fd8f87a9) Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-9smhj Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-9smhj Jan 29 18:12:20.621: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9h8t_kube-system(aa9fac52dcd6313a298b129133e69882) Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-dsnz_kube-system(4f6c109bb0f65648d820240fca6d0382) Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:12:20.621: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_be71f07a-21fc-4f39-aa70-aeae362a8313 became leader Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_0d5a2b19-5601-408b-a47c-76493d5996e8 became leader Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f6e75d31-8d47-43d3-83a7-d2209fd23f64 became leader Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_11c8be03-c1cc-493a-8694-faac9b6108ed became leader Jan 29 18:12:20.621: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ca187ae7-92eb-4516-879e-5110d01cd353 became leader Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-xnppl to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.153078114s (2.153091476s including waiting) Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for 
l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-xnppl Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:12:20.621: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-xnppl Jan 29 18:12:20.621: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 18:12:20.621: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
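The Unhealthy/Killing pair for l7-default-backend above is the kubelet restarting a container whose HTTP liveness probe against :8080/healthz timed out. For reference, a probe of that shape expressed with the Kubernetes Go API types looks roughly like the sketch below; the timeout, period, and threshold values are assumptions, not the addon's actual manifest:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Illustrative only: an HTTP liveness probe of the same shape as the one
        // failing in the events above. Numeric values are assumptions.
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
            },
            TimeoutSeconds:   5,  // exceeding this yields "Client.Timeout exceeded while awaiting headers"
            PeriodSeconds:    10, // how often the kubelet probes
            FailureThreshold: 3,  // after this many misses the kubelet restarts the container
        }
        fmt.Printf("probe %s:%s every %ds\n", probe.HTTPGet.Path, probe.HTTPGet.Port.String(), probe.PeriodSeconds)
    }
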
Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4xsdn to bootstrap-e2e-minion-group-s96g Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 739.554608ms (739.575415ms including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.702613211s (1.702626118s including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8v287 to bootstrap-e2e-minion-group-dsnz Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 739.759923ms (739.769687ms including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.535493899s (1.535504014s including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-dnsxr to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 703.801985ms (703.819535ms including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.694396463s (1.694410106s including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-glg4c to bootstrap-e2e-master Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 702.853427ms (702.859315ms including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.958774245s (1.958780937s including waiting) Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4xsdn Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8v287 Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-dnsxr Jan 29 18:12:20.621: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-glg4c Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. 
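The FailedScheduling messages for l7-default-backend and metrics-server above show the scheduler skipping nodes that still carry the node.kubernetes.io/not-ready taint, while DaemonSet pods such as metadata-proxy keep running because they are given tolerations for the not-ready and unreachable taints automatically. A sketch of the toleration shape involved (illustrative; the exact tolerations carried by each addon are not shown in this log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A toleration that lets a pod schedule onto a node tainted
        // node.kubernetes.io/not-ready:NoSchedule, the taint named in the
        // FailedScheduling messages above. Whether a given addon declares it
        // explicitly is an assumption.
        tol := corev1.Toleration{
            Key:      "node.kubernetes.io/not-ready",
            Operator: corev1.TolerationOpExists,
            Effect:   corev1.TaintEffectNoSchedule,
        }
        fmt.Printf("tolerate %s (%s)\n", tol.Key, tol.Effect)
    }
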
Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-gpgw8 to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.415068214s (3.415086682s including waiting) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 937.450811ms (937.460293ms including waiting) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-gpgw8 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-gpgw8 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 to bootstrap-e2e-minion-group-dsnz Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.219802168s (1.219816016s including waiting) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 923.70697ms (923.715586ms including waiting) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server-nanny Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.8:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.8:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.8:10250/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
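The two probe failure strings in these metrics-server events, "connect: connection refused" and "Client.Timeout exceeded while awaiting headers", are produced by Go's HTTP client inside the kubelet prober. A standalone sketch that reproduces the same error classes; the target URL is copied from the log purely as a placeholder, and unlike the kubelet's prober this sketch does not skip TLS verification:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // A client whose Timeout elapses before response headers arrive reports
        // "Client.Timeout exceeded while awaiting headers"; a closed port reports
        // "connect: connection refused".
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get("https://10.64.1.8:10250/livez") // placeholder address from the log
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.StatusCode)
    }
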
Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 18:12:20.621: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-9h8t Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.238241255s (2.238248989s including waiting) Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
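The AfterEach step that follows waits for every node to report Ready again. An illustrative client-go check of that same condition (not the e2e framework's own helper; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether a node's Ready condition is True. This is an
    // illustrative stand-in for the framework's node-readiness wait, not its code.
    func nodeReady(n corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s ready=%v\n", n.Name, nodeReady(n))
        }
    }
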
Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:12:20.621: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 18:12:20.621 (58ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 18:12:20.621 Jan 29 18:12:20.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 18:12:20.666 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 18:12:20.666 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 18:12:20.667 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 18:12:20.667 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 18:12:20.667 STEP: Collecting events from namespace "reboot-2189". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 18:12:20.667 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 18:12:20.71 Jan 29 18:12:20.751: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 18:12:20.751: INFO: Jan 29 18:12:20.796: INFO: Logging node info for node bootstrap-e2e-master Jan 29 18:12:20.838: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master de4d0d91-417f-4d9e-8e88-821fcf72cad3 2233 0 2023-01-29 17:57:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 17:57:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 17:57:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:07:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858378752 0} {<nil>} 3767948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596234752 0} {<nil>} 3511948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:24 +0000 UTC,LastTransitionTime:2023-01-29 17:57:24 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:07:42 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:07:42 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:07:42 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:07:42 +0000 UTC,LastTransitionTime:2023-01-29 17:57:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.63.53,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:284bab09bf08f5292691fdfc4343523f,SystemUUID:284bab09-bf08-f529-2691-fdfc4343523f,BootID:33ca137b-9efb-4fae-bd1d-b736b2efdf21,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:12:20.838: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 18:12:20.884: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 18:12:20.944: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 29 18:12:20.944: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 29 18:12:20.944: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 17:56:39 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 29 18:12:20.944: INFO: metadata-proxy-v0.1-glg4c started at 2023-01-29 17:57:10 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:20.944: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 18:12:20.944: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 18:12:20.944: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container kube-apiserver ready: true, restart count 1 Jan 29 18:12:20.944: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container kube-scheduler ready: true, restart count 4 Jan 29 18:12:20.944: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container etcd-container ready: true, restart count 2 Jan 29 18:12:20.944: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container etcd-container ready: true, restart count 1 Jan 29 18:12:20.944: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 17:56:39 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:20.944: INFO: Container l7-lb-controller ready: true, restart count 6 Jan 29 18:12:21.156: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 18:12:21.156: INFO: Logging node info for node bootstrap-e2e-minion-group-9h8t Jan 29 18:12:21.198: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9h8t a73caa79-09be-4952-9d8a-63d5ff2cf1d1 2468 0 2023-01-29 17:57:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9h8t kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:03:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 18:09:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 18:09:24 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-9h8t,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:16 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:24 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:24 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:24 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:09:24 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.75.88,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9h8t.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9h8t.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5df247ae00a0f0d760b6034aea558213,SystemUUID:5df247ae-00a0-f0d7-60b6-034aea558213,BootID:856ae91e-496d-4106-a705-abcfc446e6ec,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:12:21.199: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9h8t Jan 29 18:12:21.245: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9h8t Jan 29 18:12:21.297: INFO: metadata-proxy-v0.1-dnsxr started at 2023-01-29 17:57:05 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:21.297: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 
18:12:21.297: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:12:21.297: INFO: konnectivity-agent-mn6xc started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container konnectivity-agent ready: true, restart count 8 Jan 29 18:12:21.297: INFO: kube-proxy-bootstrap-e2e-minion-group-9h8t started at 2023-01-29 17:57:04 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 18:12:21.297: INFO: l7-default-backend-8549d69d99-xnppl started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container default-http-backend ready: true, restart count 3 Jan 29 18:12:21.297: INFO: volume-snapshot-controller-0 started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container volume-snapshot-controller ready: true, restart count 10 Jan 29 18:12:21.297: INFO: kube-dns-autoscaler-5f6455f985-9smhj started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container autoscaler ready: false, restart count 3 Jan 29 18:12:21.297: INFO: coredns-6846b5b5f-wbh56 started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.297: INFO: Container coredns ready: false, restart count 11 Jan 29 18:12:21.465: INFO: Latency metrics for node bootstrap-e2e-minion-group-9h8t Jan 29 18:12:21.465: INFO: Logging node info for node bootstrap-e2e-minion-group-dsnz Jan 29 18:12:21.507: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-dsnz 1713c218-b219-4b0d-b0ae-fbc51ee95790 2509 0 2023-01-29 17:57:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-dsnz kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:08:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:09:34 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 18:09:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-dsnz,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:09:20 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:34 +0000 UTC,LastTransitionTime:2023-01-29 18:09:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:34 +0000 UTC,LastTransitionTime:2023-01-29 18:09:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:34 +0000 UTC,LastTransitionTime:2023-01-29 18:09:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:09:34 +0000 UTC,LastTransitionTime:2023-01-29 18:09:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.175.64,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-dsnz.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-dsnz.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d9c2682ffdf61476c94207a696b50d63,SystemUUID:d9c2682f-fdf6-1476-c942-07a696b50d63,BootID:b1fc4976-52e3-4021-bea5-d719e168a208,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:12:21.507: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-dsnz Jan 29 18:12:21.554: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-dsnz Jan 29 18:12:21.616: INFO: kube-proxy-bootstrap-e2e-minion-group-dsnz started at 2023-01-29 17:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.616: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 18:12:21.616: INFO: metadata-proxy-v0.1-8v287 started at 2023-01-29 17:57:04 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:21.616: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 18:12:21.616: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:12:21.616: INFO: konnectivity-agent-bp4qk started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.616: INFO: Container konnectivity-agent ready: false, restart count 3 Jan 29 18:12:21.616: INFO: metrics-server-v0.5.2-867b8754b9-fpt69 started at 2023-01-29 17:57:37 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:21.616: INFO: Container metrics-server ready: false, restart count 4 Jan 29 18:12:21.616: INFO: Container metrics-server-nanny ready: false, restart count 3 Jan 29 18:12:21.774: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-dsnz Jan 29 18:12:21.774: INFO: Logging node info for node bootstrap-e2e-minion-group-s96g Jan 29 18:12:21.816: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s96g 7998b680-fc29-469b-9d21-811123638809 2495 0 2023-01-29 17:57:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s96g kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:08:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 18:09:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:09:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:09:31 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-s96g,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:09:23 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:31 +0000 UTC,LastTransitionTime:2023-01-29 
18:09:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:31 +0000 UTC,LastTransitionTime:2023-01-29 18:09:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:09:31 +0000 UTC,LastTransitionTime:2023-01-29 18:09:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:09:31 +0000 UTC,LastTransitionTime:2023-01-29 18:09:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.157.204,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s96g.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s96g.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:01be44620799a91cfe8a68e6a28b1e90,SystemUUID:01be4462-0799-a91c-fe8a-68e6a28b1e90,BootID:2d74e887-840c-4a68-8c50-3b4295ae1098,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:12:21.816: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s96g Jan 29 18:12:21.864: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s96g Jan 29 18:12:21.926: INFO: konnectivity-agent-ksl2d started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.926: INFO: Container konnectivity-agent ready: false, restart count 3 Jan 29 18:12:21.926: INFO: coredns-6846b5b5f-l4n7p started at 2023-01-29 17:57:19 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.926: INFO: Container coredns ready: false, restart count 2 Jan 29 18:12:21.926: INFO: kube-proxy-bootstrap-e2e-minion-group-s96g started at 2023-01-29 17:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 18:12:21.926: INFO: Container kube-proxy ready: 
true, restart count 3 Jan 29 18:12:21.926: INFO: metadata-proxy-v0.1-4xsdn started at 2023-01-29 17:57:03 +0000 UTC (0+2 container statuses recorded) Jan 29 18:12:21.926: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 18:12:21.926: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:12:22.093: INFO: Latency metrics for node bootstrap-e2e-minion-group-s96g END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 18:12:22.093 (1.426s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 18:12:22.093 (1.426s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 18:12:22.093 STEP: Destroying namespace "reboot-2189" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 18:12:22.093 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 18:12:22.137 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 18:12:22.137 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 18:12:22.137 (0s)
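Context for the reboot failure dumped above: the test takes each node down and then waits for the node's Ready condition to flip to False and back to True within fixed windows (the "Waiting up to 2m0s for node ... condition Ready to be false" lines later in this page show the same pattern), and the failure means at least one node never completed that cycle in time. The following is a minimal, hypothetical client-go sketch of that readiness polling, not the e2e framework's own code; the kubeconfig path, poll cadence, node-name argument, and the 2m/5m timeouts are assumptions chosen to mirror the waits seen in the log.

// Hypothetical sketch, not kubernetes/test/e2e code: poll a node's Ready
// condition, first waiting for it to go False (node went down) and then for
// it to come back True, failing if either window expires.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReadyIs reports whether the node's Ready condition currently has the wanted status.
func nodeReadyIs(ctx context.Context, c kubernetes.Interface, name string, want v1.ConditionStatus) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, nil // treat apiserver blips as "not there yet" and keep polling
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == want, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumption: ~/.kube/config
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	name := os.Args[1] // node name, e.g. bootstrap-e2e-minion-group-9h8t

	phases := []struct {
		want    v1.ConditionStatus
		timeout time.Duration
	}{
		{v1.ConditionFalse, 2 * time.Minute}, // node should drop out of Ready
		{v1.ConditionTrue, 5 * time.Minute},  // and then recover
	}
	for _, p := range phases {
		if err := wait.PollImmediate(2*time.Second, p.timeout, func() (bool, error) {
			return nodeReadyIs(context.TODO(), c, name, p.want)
		}); err != nil {
			panic(fmt.Sprintf("node %s never reached Ready=%s: %v", name, p.want, err))
		}
		fmt.Printf("node %s reached Ready=%s\n", name, p.want)
	}
}

An example invocation, against a reachable cluster, would be: go run check_ready.go bootstrap-e2e-minion-group-9h8t. The real test additionally waits for the node's system pods to be running and ready again before declaring success.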
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 18:06:03.573 There were additional failures detected after the initial failure. These are visible in the timeline. (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:05:33.444 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:05:33.444 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:05:33.444 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 18:05:33.444 Jan 29 18:05:33.444: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 18:05:33.45 Jan 29 18:05:33.493: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:35.537: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:37.533: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:39.535: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:41.535: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:43.533: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:45.534: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:47.541: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:49.533: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:51.537: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:53.534: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:55.533: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:57.533: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:05:59.533: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:06:01.533: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:06:03.533: INFO: Unexpected error while creating namespace: Post "https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:06:03.572: INFO: Unexpected error while creating namespace: Post 
"https://34.105.63.53/api/v1/namespaces": dial tcp 34.105.63.53:443: connect: connection refused Jan 29 18:06:03.572: INFO: Unexpected error: <*errors.errorString | 0xc0001cba70>: { s: "timed out waiting for the condition", } [FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 18:06:03.573 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:06:03.573 (30.129s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 18:06:03.573 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 18:06:03.573 Jan 29 18:06:03.612: INFO: Unexpected error: <*url.Error | 0xc0056b5ef0>: { Op: "Get", URL: "https://34.105.63.53/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc003f441e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00374b260>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 105, 63, 53], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0013f19e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.105.63.53/api/v1/namespaces/kube-system/events": dial tcp 34.105.63.53:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 18:06:03.613 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 18:06:03.613 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 18:06:03.613 Jan 29 18:06:03.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 18:06:03.652 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 18:06:03.652 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 18:06:03.652 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 18:06:03.652 (0s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 18:06:03.652 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 18:06:03.652 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 18:06:03.652 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 18:06:03.652 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 18:06:03.652 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 18:06:03.573 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
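The wall of "connection refused" messages in the entry above comes from the framework's BeforeEach retrying namespace creation roughly every 2s until its ~30s window expires; because the apiserver at 34.105.63.53:443 never became reachable in that window, the step surfaced as "timed out waiting for the condition". A minimal client-go sketch of that retry pattern follows, as an illustration only and not the framework's actual implementation; the generateName prefix, poll interval, and timeout are assumptions chosen to match the timestamps in the log.

// Hypothetical sketch, not the e2e framework's implementation: retry namespace
// creation against the apiserver until it succeeds or the window expires, the
// same shape of loop that produced the "connection refused" retries above.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumption: ~/.kube/config
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	var ns *v1.Namespace
	// Assumed cadence: try every 2s for up to 30s.
	err = wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		created, cerr := client.CoreV1().Namespaces().Create(context.TODO(), &v1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: "reboot-"},
		}, metav1.CreateOptions{})
		if cerr != nil {
			// A "connection refused" from an unreachable apiserver lands here;
			// returning (false, nil) keeps the poll going until the timeout.
			fmt.Println("retrying:", cerr)
			return false, nil
		}
		ns = created
		return true, nil
	})
	if err != nil {
		// With the apiserver down for the whole window, this is where the
		// "timed out waiting for the condition" failure comes from.
		panic(err)
	}
	fmt.Println("created namespace", ns.Name)
}

Returning false with a nil error from the poll function is what keeps the loop retrying through transient errors; only the expired timeout turns into the final failure.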
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 18:17:23.193 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:12:22.47 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:12:22.47 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:12:22.47 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 18:12:22.47 Jan 29 18:12:22.470: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 18:12:22.472 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 18:12:22.597 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 18:12:22.677 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:12:22.767 (296ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:12:22.767 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:12:22.767 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 18:12:22.767 Jan 29 18:12:22.913: INFO: Getting bootstrap-e2e-minion-group-s96g Jan 29 18:12:22.913: INFO: Getting bootstrap-e2e-minion-group-dsnz Jan 29 18:12:22.913: INFO: Getting bootstrap-e2e-minion-group-9h8t Jan 29 18:12:22.959: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dsnz condition Ready to be true Jan 29 18:12:22.959: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-9h8t condition Ready to be true Jan 29 18:12:22.959: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s96g condition Ready to be true Jan 29 18:12:23.001: INFO: Node bootstrap-e2e-minion-group-dsnz has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:12:23.001: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:12:23.001: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8v287" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.001: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Node bootstrap-e2e-minion-group-9h8t has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-9smhj kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0] Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-9smhj kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0] Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Node bootstrap-e2e-minion-group-s96g has 2 assigned pods with no liveness probes: 
[kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-4xsdn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-9smhj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s96g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9h8t" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-dnsxr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.046: INFO: Pod "metadata-proxy-v0.1-8v287": Phase="Running", Reason="", readiness=true. Elapsed: 44.670697ms Jan 29 18:12:23.046: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz": Phase="Running", Reason="", readiness=true. Elapsed: 44.719588ms Jan 29 18:12:23.046: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.046: INFO: Pod "metadata-proxy-v0.1-8v287" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.046: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:12:23.046: INFO: Getting external IP address for bootstrap-e2e-minion-group-dsnz Jan 29 18:12:23.046: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-dsnz(34.168.175.64:22) Jan 29 18:12:23.049: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 46.372237ms Jan 29 18:12:23.049: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 47.292369ms Jan 29 18:12:23.050: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9h8t": Phase="Running", Reason="", readiness=true. Elapsed: 47.156307ms Jan 29 18:12:23.050: INFO: Pod "metadata-proxy-v0.1-dnsxr": Phase="Running", Reason="", readiness=true. 
Elapsed: 47.109765ms Jan 29 18:12:23.050: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9h8t" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Pod "metadata-proxy-v0.1-dnsxr" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:23.050: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g": Phase="Running", Reason="", readiness=true. Elapsed: 47.346638ms Jan 29 18:12:23.050: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=true. Elapsed: 47.513037ms Jan 29 18:12:23.050: INFO: Pod "metadata-proxy-v0.1-4xsdn" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:12:23.050: INFO: Getting external IP address for bootstrap-e2e-minion-group-s96g Jan 29 18:12:23.050: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-s96g(35.233.157.204:22) Jan 29 18:12:23.561: INFO: ssh prow@35.233.157.204:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:12:23.561: INFO: ssh prow@35.233.157.204:22: stdout: "" Jan 29 18:12:23.561: INFO: ssh prow@35.233.157.204:22: stderr: "" Jan 29 18:12:23.561: INFO: ssh prow@35.233.157.204:22: exit code: 0 Jan 29 18:12:23.561: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s96g condition Ready to be false Jan 29 18:12:23.570: INFO: ssh prow@34.168.175.64:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; 
echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:12:23.570: INFO: ssh prow@34.168.175.64:22: stdout: "" Jan 29 18:12:23.570: INFO: ssh prow@34.168.175.64:22: stderr: "" Jan 29 18:12:23.570: INFO: ssh prow@34.168.175.64:22: exit code: 0 Jan 29 18:12:23.570: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-dsnz condition Ready to be false Jan 29 18:12:23.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:23.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:25.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2.089823494s Jan 29 18:12:25.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:25.647: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:25.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:27.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4.089265325s Jan 29 18:12:27.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:27.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:27.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:29.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.089323695s Jan 29 18:12:29.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:29.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:29.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:31.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 8.089209655s Jan 29 18:12:31.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:31.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:31.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:33.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 10.091434454s Jan 29 18:12:33.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:33.821: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:33.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:12:35.096: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 12.093161368s Jan 29 18:12:35.096: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:35.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:35.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:37.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 14.090075866s Jan 29 18:12:37.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:37.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:37.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:39.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 16.089122196s Jan 29 18:12:39.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:39.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:39.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:41.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 18.089823618s Jan 29 18:12:41.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:41.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:42.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:43.096: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 20.093764363s Jan 29 18:12:43.097: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:44.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:44.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:45.209: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 22.206188231s Jan 29 18:12:45.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:46.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:12:46.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:47.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 24.089320986s Jan 29 18:12:47.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:48.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:48.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:49.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 26.089510553s Jan 29 18:12:49.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:50.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:50.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:51.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 28.089520116s Jan 29 18:12:51.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:52.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:52.219: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:53.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 30.09004384s Jan 29 18:12:53.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:54.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:54.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:55.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 32.089767183s Jan 29 18:12:55.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:56.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:56.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:57.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.089610549s Jan 29 18:12:57.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:58.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:58.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:59.096: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 36.093181164s Jan 29 18:12:59.096: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:00.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:00.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:01.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 38.089092437s Jan 29 18:13:01.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:02.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:02.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:13:03.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 40.08965695s Jan 29 18:13:03.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:04.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:04.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:05.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 42.090348362s Jan 29 18:13:05.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:06.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:06.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:07.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 44.089619307s Jan 29 18:13:07.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:08.556: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-s96g condition Ready to be true Jan 29 18:13:08.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 29 18:13:08.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:09.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 46.089777878s
Jan 29 18:13:09.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:10.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 18:13:10.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:11.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 48.089577445s
Jan 29 18:13:11.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:12.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 18:13:12.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
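
At 18:13:08 bootstrap-e2e-minion-group-s96g is finally reported NotReady (reason NodeStatusUnknown: the kubelet stopped posting node status once eth0 went down), and the test is now waiting for it to come back Ready. Outside the test harness the same condition can be inspected by hand; a minimal sketch, assuming kubectl access to this cluster (not part of the test output):

    # Status of the node's Ready condition (True/False/Unknown)
    kubectl get node bootstrap-e2e-minion-group-s96g \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    # Reason attached to that condition (e.g. KubeletReady, NodeStatusUnknown)
    kubectl get node bootstrap-e2e-minion-group-s96g \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}{"\n"}'
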
Jan 29 18:13:13.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 50.089484619s
Jan 29 18:13:13.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:14.697: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-dsnz condition Ready to be true
Jan 29 18:13:14.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:14.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:15.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 52.089954131s
Jan 29 18:13:15.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:16.779: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:16.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
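
The "tainted by NodeController" entries reflect the node lifecycle controller reacting to the unreachable node: it has added the node.kubernetes.io/unreachable NoSchedule and NoExecute taints recorded in the log. A quick manual way to list a node's taints, again only a sketch assuming kubectl access and not part of the test output:

    # Key and effect of every taint currently set on the node
    kubectl get node bootstrap-e2e-minion-group-dsnz \
      -o jsonpath='{range .spec.taints[*]}{.key}{" "}{.effect}{"\n"}{end}'
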
Jan 29 18:13:17.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 54.089419772s
Jan 29 18:13:17.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:18.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:18.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:19.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 56.089674638s
Jan 29 18:13:19.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:20.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:20.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:21.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 58.08977252s
Jan 29 18:13:21.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:22.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}].
Failure Jan 29 18:13:22.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:13:23.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.090007565s Jan 29 18:13:23.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:24.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:24.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:25.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.089348308s Jan 29 18:13:25.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:27.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:27.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:27.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.089753466s Jan 29 18:13:27.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:29.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:29.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:29.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.089127105s Jan 29 18:13:29.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:31.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.090802611s Jan 29 18:13:31.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:31.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:31.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. 
Failure Jan 29 18:13:33.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.090021811s Jan 29 18:13:33.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:33.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:33.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:35.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.089927475s Jan 29 18:13:35.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:35.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:35.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:37.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.089237181s Jan 29 18:13:37.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:37.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:37.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:39.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.090957488s Jan 29 18:13:39.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:39.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:39.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:41.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m18.08942902s Jan 29 18:13:41.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:41.322: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:41.322: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:43.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.089854368s Jan 29 18:13:43.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:43.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:43.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:45.095: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m22.091906806s Jan 29 18:13:45.095: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:45.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:45.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:47.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.09067358s Jan 29 18:13:47.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:47.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:47.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:49.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m26.089681699s Jan 29 18:13:49.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:49.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:49.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:51.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.089871032s Jan 29 18:13:51.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:51.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:51.547: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:53.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m30.089818134s Jan 29 18:13:53.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:53.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:53.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:55.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.091601758s Jan 29 18:13:55.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:55.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:55.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:57.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m34.090689334s Jan 29 18:13:57.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:57.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:13:57.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:59.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.089714749s Jan 29 18:13:59.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:59.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:13:59.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:01.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m38.089252616s Jan 29 18:14:01.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:01.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:01.779: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:03.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.090236661s Jan 29 18:14:03.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:03.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:03.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:05.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m42.089328468s Jan 29 18:14:05.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:05.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:05.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:07.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.089083472s Jan 29 18:14:07.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:07.915: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:07.915: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:09.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m46.089250758s Jan 29 18:14:09.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:09.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:09.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:11.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.089646394s Jan 29 18:14:11.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:12.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:12.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:13.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
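The interleaved node entries show the other half of the picture: while outbound packets are dropped, each node's Ready condition goes False and the node lifecycle controller adds node.kubernetes.io/unreachable taints, first with the NoSchedule effect and then NoExecute, which is what the "tainted by NodeController" messages report. A small sketch (again illustrative; the package and function names are made up, and a clientset is assumed to be built as in the previous snippet) that reads the same condition and taint data:

package rebootcheck

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpNodeState prints the Ready condition and any taints on the node, the
// information behind lines like "Condition Ready of node ... is false, but
// Node is tainted by NodeController with [...]".
func dumpNodeState(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("Ready=%s reason=%q since=%v\n", c.Status, c.Reason, c.LastTransitionTime)
		}
	}
	for _, t := range node.Spec.Taints {
		// "node.kubernetes.io/unreachable" with NoSchedule/NoExecute is what the
		// node lifecycle controller adds while the kubelet cannot be reached.
		fmt.Printf("taint %s:%s added=%v\n", t.Key, t.Effect, t.TimeAdded)
	}
	return nil
}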
Elapsed: 1m50.090020334s Jan 29 18:14:13.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:14.052: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:14.052: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:15.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.090069539s Jan 29 18:14:15.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:16.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:16.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:17.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m54.089059267s Jan 29 18:14:17.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:18.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:18.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:19.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.089530038s Jan 29 18:14:19.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:20.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:20.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:21.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m58.090149595s Jan 29 18:14:21.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:22.275: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:22.275: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:23.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.090990724s Jan 29 18:14:23.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:24.323: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:24.323: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:25.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
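The NoExecute form of the unreachable taint matters because pods without a matching toleration are evicted by the taint manager once it takes effect; DaemonSet pods such as metadata-proxy are given tolerations for these taints automatically, which is consistent with kube-proxy and metadata-proxy still being the pods waited on for each node after the blackout. Purely as an illustration, and not something configured by this test, a toleration of that shape looks roughly like:

package rebootcheck

import (
	v1 "k8s.io/api/core/v1"
)

// unreachableToleration lets a pod stay bound to a node that carries the
// node.kubernetes.io/unreachable taint with the NoExecute effect.
func unreachableToleration(seconds int64) v1.Toleration {
	return v1.Toleration{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds, // how long to stay before eviction; omit to tolerate indefinitely
	}
}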
Elapsed: 2m2.089281907s Jan 29 18:14:25.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:26.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:26.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:27.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.089346295s Jan 29 18:14:27.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:28.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:28.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:29.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m6.089859074s Jan 29 18:14:29.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:30.462: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:30.462: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:31.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.090069817s Jan 29 18:14:31.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:32.508: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:32.508: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:33.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.089435476s Jan 29 18:14:33.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:34.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:34.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:35.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.08939406s Jan 29 18:14:35.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:36.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:36.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:37.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m14.089914641s Jan 29 18:14:37.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:38.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:38.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:39.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.089208465s Jan 29 18:14:39.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:40.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure Jan 29 18:14:40.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure Jan 29 18:14:41.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.08991441s Jan 29 18:14:41.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:42.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. 
Failure
Jan 29 18:14:42.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:14:43.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.089547551s
Jan 29 18:14:43.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:14:44.787: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn]
Jan 29 18:14:44.787: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287]
Jan 29 18:14:44.787: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8v287" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:14:44.787: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-4xsdn" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:14:44.787: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:14:44.787: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s96g" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 18:14:44.834: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g": Phase="Running", Reason="", readiness=true. Elapsed: 46.487831ms
Jan 29 18:14:44.834: INFO: Pod "metadata-proxy-v0.1-8v287": Phase="Running", Reason="", readiness=true. Elapsed: 46.604525ms
Jan 29 18:14:44.834: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g" satisfied condition "running and ready, or succeeded"
Jan 29 18:14:44.834: INFO: Pod "metadata-proxy-v0.1-8v287" satisfied condition "running and ready, or succeeded"
Jan 29 18:14:44.834: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz": Phase="Running", Reason="", readiness=true. Elapsed: 46.742254ms
Jan 29 18:14:44.834: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" satisfied condition "running and ready, or succeeded"
Jan 29 18:14:44.834: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287]
Jan 29 18:14:44.834: INFO: Reboot successful on node bootstrap-e2e-minion-group-dsnz
Jan 29 18:14:44.834: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false.
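At 18:14:40 both nodes report Ready again; the remaining "Failure" lines only note that the NoExecute unreachable taint has not been removed yet. Once the per-node check finally passes, the test starts a fresh "Waiting up to 5m0s ..." round for that node's kube-proxy and metadata-proxy pods, and bootstrap-e2e-minion-group-dsnz is declared rebooted successfully at 18:14:44. The wait being described is essentially a poll-until-timeout every couple of seconds. A rough client-go equivalent, using wait.PollUntilContextTimeout from recent apimachinery releases (older releases have wait.PollImmediate instead) and not the framework's own helper:

package rebootcheck

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitRunningAndReady polls until the pod is Running with Ready=True, has
// Succeeded, or the 5 minute budget from the log runs out.
func waitRunningAndReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient API errors
			}
			if pod.Status.Phase == v1.PodSucceeded {
				return true, nil
			}
			if pod.Status.Phase != v1.PodRunning {
				return false, nil
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}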
Elapsed: 46.892259ms Jan 29 18:14:44.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:14:45.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.090279413s Jan 29 18:14:45.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:46.877: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 2.090012837s Jan 29 18:14:46.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:14:47.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.089350426s Jan 29 18:14:47.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:48.877: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.089601862s Jan 29 18:14:48.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:14:49.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.09007608s Jan 29 18:14:49.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:50.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 6.090877017s Jan 29 18:14:50.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:14:51.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.089032824s Jan 29 18:14:51.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:52.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.090659818s Jan 29 18:14:52.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:14:53.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.08914892s Jan 29 18:14:53.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:54.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 10.091114402s Jan 29 18:14:54.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:14:55.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.08941953s Jan 29 18:14:55.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:56.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.090676811s Jan 29 18:14:56.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:14:57.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.090097255s Jan 29 18:14:57.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:14:58.877: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 14.089728503s Jan 29 18:14:58.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:14:59.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.090342194s Jan 29 18:14:59.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:00.877: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.089487283s Jan 29 18:15:00.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:01.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.089897081s Jan 29 18:15:01.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:02.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 18.091000377s Jan 29 18:15:02.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:03.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.089140259s Jan 29 18:15:03.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:04.879: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.092144479s Jan 29 18:15:04.879: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:05.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m42.08958301s Jan 29 18:15:05.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:06.877: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 22.090277376s Jan 29 18:15:06.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:07.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.089801434s Jan 29 18:15:07.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:08.877: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.089713464s Jan 29 18:15:08.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:09.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.08996553s Jan 29 18:15:09.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:10.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 26.090908979s Jan 29 18:15:10.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:11.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.089017507s Jan 29 18:15:11.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:12.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.090875763s Jan 29 18:15:12.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:13.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.089318298s Jan 29 18:15:13.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:14.877: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 30.089464108s Jan 29 18:15:14.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:15.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m52.089861964s Jan 29 18:15:15.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:16.880: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.093250917s Jan 29 18:15:16.880: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:17.095: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m54.092053599s Jan 29 18:15:17.095: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:18.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. Elapsed: 34.090507751s Jan 29 18:15:18.878: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:19.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.091643895s Jan 29 18:15:19.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:20.877: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.090368349s Jan 29 18:15:20.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-4xsdn' on 'bootstrap-e2e-minion-group-s96g' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:13:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:02 +0000 UTC }] Jan 29 18:15:21.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.090015902s Jan 29 18:15:21.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:22.878: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=true. Elapsed: 38.090687046s Jan 29 18:15:22.878: INFO: Pod "metadata-proxy-v0.1-4xsdn" satisfied condition "running and ready, or succeeded" Jan 29 18:15:22.878: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:15:22.878: INFO: Reboot successful on node bootstrap-e2e-minion-group-s96g Jan 29 18:15:23.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m0.089512222s Jan 29 18:15:23.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:25.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m2.089762276s Jan 29 18:15:25.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:27.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.089415681s Jan 29 18:15:27.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:29.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m6.090131599s Jan 29 18:15:29.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:31.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m8.090550817s Jan 29 18:15:31.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:33.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m10.089970407s Jan 29 18:15:33.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:35.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m12.090841262s Jan 29 18:15:35.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:37.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.090143222s Jan 29 18:15:37.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:39.142: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.13944748s Jan 29 18:15:39.142: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:41.091: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m18.088767095s Jan 29 18:15:41.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:43.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.089492028s Jan 29 18:15:43.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:45.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.091000413s Jan 29 18:15:45.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:47.112: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m24.109442332s Jan 29 18:15:47.112: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:49.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m26.089325876s Jan 29 18:15:49.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:51.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.09013389s Jan 29 18:15:51.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:53.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.09054586s Jan 29 18:15:53.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:55.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m32.089770161s Jan 29 18:15:55.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:57.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m34.089608015s Jan 29 18:15:57.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:15:59.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m36.089169653s Jan 29 18:15:59.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:01.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.089919932s Jan 29 18:16:01.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:03.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m40.090520384s Jan 29 18:16:03.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:05.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m42.090597768s Jan 29 18:16:05.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:07.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m44.090336818s Jan 29 18:16:07.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:09.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.089666185s Jan 29 18:16:09.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:11.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m48.090512673s Jan 29 18:16:11.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:13.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m50.089524829s Jan 29 18:16:13.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:15.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m52.089396318s Jan 29 18:16:15.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:17.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m54.091162852s Jan 29 18:16:17.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:19.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 3m56.090315703s Jan 29 18:16:19.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:21.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m58.089618202s Jan 29 18:16:21.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:23.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m0.089742962s Jan 29 18:16:23.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:25.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.090591091s Jan 29 18:16:25.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:27.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m4.089600942s Jan 29 18:16:27.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:29.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m6.089580537s Jan 29 18:16:29.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:31.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m8.089012295s Jan 29 18:16:31.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:33.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.090627997s Jan 29 18:16:33.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:35.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m12.089907888s Jan 29 18:16:35.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:37.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m14.089781702s Jan 29 18:16:37.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:39.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m16.09004106s Jan 29 18:16:39.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:41.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.089637636s Jan 29 18:16:41.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:43.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m20.090747505s Jan 29 18:16:43.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:45.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m22.089481954s Jan 29 18:16:45.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:47.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m24.090168594s Jan 29 18:16:47.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:49.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.089594619s Jan 29 18:16:49.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:51.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m28.090606603s Jan 29 18:16:51.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:53.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m30.090295158s Jan 29 18:16:53.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:55.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m32.089220923s Jan 29 18:16:55.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:57.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.089905309s Jan 29 18:16:57.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:16:59.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m36.090106156s Jan 29 18:16:59.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:01.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m38.089833162s Jan 29 18:17:01.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:03.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.089897765s Jan 29 18:17:03.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:05.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.089566174s Jan 29 18:17:05.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:07.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m44.090252885s Jan 29 18:17:07.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:09.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m46.089211206s Jan 29 18:17:09.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:11.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m48.089821875s Jan 29 18:17:11.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:13.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.090027312s Jan 29 18:17:13.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:15.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m52.089424405s Jan 29 18:17:15.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:17.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m54.090041821s Jan 29 18:17:17.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:19.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.089863669s Jan 29 18:17:19.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:21.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.090895318s Jan 29 18:17:21.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on (Spec Runtime: 5m0.298s) test/e2e/cloud/gcp/reboot.go:115 In [It] (Node Runtime: 5m0.001s) test/e2e/cloud/gcp/reboot.go:115 Spec Goroutine goroutine 6577 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc001760348?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f8de01232c0?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f8de01232c0?, 0xc004a1a6c0}, {0x8147108?, 0xc002c6e4e0}, {0x7903e5f, 0x21e}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.6({0x7f8de01232c0?, 0xc004a1a6c0?}) test/e2e/cloud/gcp/reboot.go:133 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc004a1a6c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7574 [chan receive, 5 minutes] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7f8de01232c0?, 0xc004a1a6c0}, {0x8147108?, 0xc002c6e4e0}, {0x76d190b, 0xb}, {0xc00482acc0, 0x4, 0x4}, 0x45d964b800, ...) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...) test/e2e/framework/pod/resource.go:508 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f8de01232c0, 0xc004a1a6c0}, {0x8147108, 0xc002c6e4e0}, {0x7ffd0e8245ee, 0x3}, {0xc00474e060, 0x1f}, {0x7903e5f, 0x21e}) test/e2e/cloud/gcp/reboot.go:284 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 18:17:23.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.090141103s Jan 29 18:17:23.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:23.135: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.132445288s Jan 29 18:17:23.135: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:17:23.135: INFO: Pod kube-dns-autoscaler-5f6455f985-9smhj failed to be running and ready, or succeeded. Jan 29 18:17:23.135: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
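Editor's note: the goroutine trace above shows where the run is stuck. testReboot (reboot.go:181) waits on a WaitGroup with one goroutine per node; the goroutine for bootstrap-e2e-minion-group-9h8t is blocked in rebootNode → CheckPodsRunningReadyOrSucceeded → checkPodsCondition, whose 0x45d964b800 argument is 300000000000 ns, i.e. the 5m0s budget that expires right after. As a rough illustration only (not the e2e framework's own helper), the sketch below polls one pod every 2 seconds for up to 5 minutes and applies the same "running and ready, or succeeded" predicate the log keeps reporting. The kubeconfig path, namespace, and pod name are taken from the log; the helper names are invented for this sketch.

```go
// Illustrative sketch, NOT the framework's CheckPodsRunningReadyOrSucceeded:
// poll a single pod and accept it once it is Succeeded, or Running with Ready=True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// runningReadyOrSucceeded mirrors the "running and ready, or succeeded" wording
// in the log: Succeeded pods pass; Running pods pass only if Ready is True.
func runningReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config") // path from the log
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same cadence and budget the log shows: a check roughly every 2s, give up after 5m.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-dns-autoscaler-5f6455f985-9smhj", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient apiserver errors and keep polling
		}
		return runningReadyOrSucceeded(pod), nil
	})
	if err != nil {
		fmt.Println("pod never became ready within the timeout:", err)
	}
}
```

The real helper checks all pods assigned to the node in parallel and tolerates the apiserver outage window during the reboot; this sketch only captures the per-pod predicate and the 5-minute timeout.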
Pods: [kube-dns-autoscaler-5f6455f985-9smhj kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0] Jan 29 18:17:23.135: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-9smhj: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 17:57:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 18:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 18:04:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 17:57:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP: PodIPs:[] StartTime:2023-01-29 17:57:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 18:02:46 +0000 UTC,FinishedAt:2023-01-29 18:03:27 +0000 UTC,ContainerID:containerd://950ea0c01909be3e17165f748ab6c2d38a95a221cf18aba5f3ab884dd49d543c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:3 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://950ea0c01909be3e17165f748ab6c2d38a95a221cf18aba5f3ab884dd49d543c Started:0xc004910687}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 18:17:23.192: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-9smhj/autoscaler: Jan 29 18:17:23.192: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-9smhj/autoscaler: Jan 29 18:17:23.192: INFO: Node bootstrap-e2e-minion-group-9h8t failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 18:17:23.193 < Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 18:17:23.193 (5m0.427s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 18:17:23.193 STEP: Collecting events from namespace "kube-system". 
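Editor's note: the status dump above carries the actual diagnostic payload for the failure: the autoscaler container last terminated with ExitCode 255 and Reason "Unknown" at 18:03:27, has RestartCount 3, and never came back Ready, which is why bootstrap-e2e-minion-group-9h8t is marked as failing the reboot test. A small standalone helper along these lines (assumed, not part of the framework) prints the same container-level fields when triaging such a pod.

```go
// Illustrative triage helper: print restart count, current state, and last
// termination for each container of the pod named in the status dump above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"kube-dns-autoscaler-5f6455f985-9smhj", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: ready=%v restarts=%d\n", cs.Name, cs.Ready, cs.RestartCount)
		if t := cs.State.Terminated; t != nil {
			// Matches the Terminated{ExitCode:255, Reason:Unknown, ...} block in the dump.
			fmt.Printf("  currently terminated: exit=%d reason=%q finished=%s\n",
				t.ExitCode, t.Reason, t.FinishedAt)
		}
		if t := cs.LastTerminationState.Terminated; t != nil {
			fmt.Printf("  last termination: exit=%d reason=%q\n", t.ExitCode, t.Reason)
		}
	}
}
```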
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 18:17:23.195 Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-l4n7p to bootstrap-e2e-minion-group-s96g Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 971.317987ms (971.327027ms including waiting) Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Readiness probe failed: Get "http://10.64.0.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Container coredns failed liveness probe, will be restarted Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Failed: Error: failed to get sandbox container task: no running task found: task ee1da3c0beb16cde0b660c004353384fc19f8a2377b29f81fd02e1d3e5b59fb9 not found: not found Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-l4n7p Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
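
For context on what the hung checkPodsCondition/CheckPodsRunningReadyOrSucceeded calls in the goroutine dump above are waiting for: the condition reduces to "the pod has Succeeded, or it is Running with a Ready=True condition". A minimal client-go sketch of that check follows; the function names and the pre-configured clientset cs are illustrative, not the e2e framework's own helpers.

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // runningAndReadyOrSucceeded restates the condition the hung goroutine is
    // waiting for: the pod completed successfully, or it is Running and Ready.
    func runningAndReadyOrSucceeded(pod *corev1.Pod) bool {
        if pod.Status.Phase == corev1.PodSucceeded {
            return true
        }
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // checkPod fetches one pod and applies the predicate; cs is assumed to be an
    // already-configured clientset.
    func checkPod(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if !runningAndReadyOrSucceeded(pod) {
            return fmt.Errorf("pod %s/%s is not running and ready, or succeeded", ns, name)
        }
        return nil
    }
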
Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-l4n7p: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-l4n7p Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-wbh56 to bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.16200824s (3.162016014s including waiting) Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.7:8181/ready": dial tcp 10.64.2.7:8181: connect: connection refused Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
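
The "Collecting events from namespace" step and the long "event for ..." dump it produces amount to a plain namespace-scoped event list. A rough, illustrative client-go equivalent is below; dumpNamespaceEvents and the clientset argument are assumptions of this sketch, not framework APIs.

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // dumpNamespaceEvents prints each event roughly in the
    // "event for <object>: {<component> <host>} <reason>: <message>" shape
    // seen in the log above.
    func dumpNamespaceEvents(ctx context.Context, cs kubernetes.Interface, ns string) error {
        events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, e := range events.Items {
            fmt.Printf("event for %s: {%s %s} %s: %s\n",
                e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
        }
        return nil
    }
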
Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.12:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.22:8181/ready": dial tcp 10.64.2.22:8181: connect: connection refused Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: Get "http://10.64.2.26:8181/ready": dial tcp 10.64.2.26:8181: connect: connection refused Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-wbh56 Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container coredns Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-wbh56_kube-system(dcc02a24-e34f-4aee-8574-9dff7dafcb7d) Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f-wbh56: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-wbh56 Jan 29 18:17:23.254: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-l4n7p Jan 29 18:17:23.254: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 18:17:23.254: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 18:17:23.254: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 18:17:23.254: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 18:17:23.254: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 18:17:23.254: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 18:17:23.254: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 18:17:23.254: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 18:17:23.254: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 18:17:23.254: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
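
The earlier "Retrieving log for the last terminated container .../autoscaler" lines came back empty; fetching that previous instance's output is done with a pod log request that sets Previous. An illustrative sketch, again assuming an already-configured clientset:

    package sketch

    import (
        "context"
        "io"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // previousContainerLog fetches the log of the previous (last terminated)
    // instance of a container, which is what the framework attempts for the
    // autoscaler container after it exited with code 255.
    func previousContainerLog(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) (string, error) {
        req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
            Container: container,
            Previous:  true, // log of the last terminated instance
        })
        stream, err := req.Stream(ctx)
        if err != nil {
            return "", err
        }
        defer stream.Close()
        b, err := io.ReadAll(stream)
        return string(b), err
    }
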
Jan 29 18:17:23.254: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 18:17:23.254: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 18:17:23.254: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a4d04 became leader Jan 29 18:17:23.254: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_56ded became leader Jan 29 18:17:23.254: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e42fb became leader Jan 29 18:17:23.254: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_84cfc became leader Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-bp4qk to bootstrap-e2e-minion-group-dsnz Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 599.898279ms (599.907942ms including waiting) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Failed: Error: failed to get sandbox container task: no running task found: task 34af16972f1a15f7cd3de2359f5283edc4cb1afaaa95c05825bdfd8c875871a7 not found: not found Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-bp4qk: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-ksl2d to bootstrap-e2e-minion-group-s96g Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.803177ms (589.813561ms including waiting) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Failed: Error: failed to get sandbox container task: no running task found: task 0ac9f140b699b69eb44f2572006896f1eae931c0983a4f39deffc55da2ac125d not found: not found Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-ksl2d_kube-system(42ec1e63-2728-4047-9c5d-36e785eb0141) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-ksl2d: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
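
The konnectivity-agent liveness failures above are HTTP GET probes against :8093/healthz on the pod IP timing out while the node's network is disrupted. For reference, a probe of that general shape looks roughly like the following; the numeric thresholds are illustrative, not the addon's actual manifest values.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // konnectivityLivenessProbe mirrors the shape of the probe failing in the
    // events above: an HTTP GET on :8093/healthz. Thresholds are illustrative.
    var konnectivityLivenessProbe = corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path:   "/healthz",
                Port:   intstr.FromInt(8093),
                Scheme: corev1.URISchemeHTTP,
            },
        },
        InitialDelaySeconds: 15,
        TimeoutSeconds:      15,
        PeriodSeconds:       10,
        FailureThreshold:    3,
    }
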
Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-mn6xc to bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 681.556582ms (681.572388ms including waiting) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.14:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Failed: Error: failed to get sandbox container task: no running task found: task 21c426eded0fc015f1ab3856fd138eba814545aef659a4d560d8a1cd814f6bd1 not found: not found Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container konnectivity-agent Jan 29 18:17:23.254: INFO: event for konnectivity-agent-mn6xc: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-mn6xc_kube-system(fa7260b8-fd37-4dba-8214-14e74d09aef2) Jan 29 18:17:23.254: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-mn6xc Jan 29 18:17:23.254: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-ksl2d Jan 29 18:17:23.254: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-bp4qk Jan 29 18:17:23.254: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 18:17:23.254: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 18:17:23.254: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 18:17:23.254: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused Jan 29 18:17:23.254: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.254: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 18:17:23.254: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 18:17:23.254: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 18:17:23.254: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 18:17:23.254: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 18:17:23.254: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.254: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 18:17:23.254: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 18:17:23.254: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:17:23.254: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 18:17:23.254: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 18:17:23.254: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 18:17:23.254: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.254: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 18:17:23.254: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 18:17:23.254: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 18:17:23.254: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
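
The repeated BackOff events ("Back-off restarting failed container ...") correspond to containers in crash-loop; the useful detail usually sits in the container status, as in the autoscaler status dump earlier (RestartCount 3, last termination with exit code 255). A small sketch of pulling that information out of a pod object; the helper name is illustrative.

    package sketch

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // describeRestarts summarises why containers keep landing in back-off,
    // along the lines of the status dump printed for the autoscaler container.
    func describeRestarts(pod *corev1.Pod) {
        for _, cs := range pod.Status.ContainerStatuses {
            fmt.Printf("%s: ready=%t restarts=%d\n", cs.Name, cs.Ready, cs.RestartCount)
            if t := cs.State.Terminated; t != nil {
                fmt.Printf("  currently terminated: exit=%d reason=%q\n", t.ExitCode, t.Reason)
            }
            if t := cs.LastTerminationState.Terminated; t != nil {
                fmt.Printf("  last termination: exit=%d reason=%q finished=%s\n",
                    t.ExitCode, t.Reason, t.FinishedAt.Time)
            }
        }
    }
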
Jan 29 18:17:23.254: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 18:17:23.254: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8973717d-b4ea-4827-92b8-c82ef47ba807 became leader Jan 29 18:17:23.254: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_284471f6-f43b-49f3-ab98-bff9e88f88c0 became leader Jan 29 18:17:23.254: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_72fddcf4-b350-465c-9671-5552ed476fbc became leader Jan 29 18:17:23.254: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:17:23.254: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:17:23.254: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-9smhj to bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.254: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.941795269s (1.941803615s including waiting) Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container autoscaler Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container autoscaler Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
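
The FailedScheduling events above ("untolerated taint {node.kubernetes.io/not-ready: }") mean the rebooted nodes were still tainted not-ready when the scheduler retried. Pods that must land on such nodes carry a toleration of roughly this shape; the values shown are common defaults, for illustration only.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // tolerateNotReady lets a pod schedule onto a node still carrying the
    // node.kubernetes.io/not-ready taint, which is what the FailedScheduling
    // events above are about.
    var tolerateNotReady = corev1.Toleration{
        Key:      "node.kubernetes.io/not-ready",
        Operator: corev1.TolerationOpExists,
        Effect:   corev1.TaintEffectNoSchedule,
    }
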
Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container autoscaler Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container autoscaler Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container autoscaler Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-9smhj_kube-system(7269d21a-8222-4363-800b-6662fd8f87a9) Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985-9smhj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-9smhj Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-9smhj Jan 29 18:17:23.255: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9h8t: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9h8t_kube-system(aa9fac52dcd6313a298b129133e69882) Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-dsnz_kube-system(4f6c109bb0f65648d820240fca6d0382) Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-dsnz: {kubelet bootstrap-e2e-minion-group-dsnz} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 18:17:23.255: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s96g: {kubelet bootstrap-e2e-minion-group-s96g} Killing: Stopping container kube-proxy Jan 29 18:17:23.255: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 18:17:23.255: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 18:17:23.255: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 18:17:23.255: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 18:17:23.255: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.255: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 18:17:23.255: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_be71f07a-21fc-4f39-aa70-aeae362a8313 became leader Jan 29 18:17:23.255: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_0d5a2b19-5601-408b-a47c-76493d5996e8 became leader Jan 29 18:17:23.255: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f6e75d31-8d47-43d3-83a7-d2209fd23f64 became leader Jan 29 18:17:23.255: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_11c8be03-c1cc-493a-8694-faac9b6108ed became leader Jan 29 18:17:23.255: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ca187ae7-92eb-4516-879e-5110d01cd353 became leader Jan 29 18:17:23.255: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c453d97a-908a-4657-a3cf-7cbe5f47ead4 became leader Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-xnppl to bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.153078114s (2.153091476s including waiting) Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: 
INFO: event for l7-default-backend-8549d69d99-xnppl: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container default-http-backend Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-xnppl Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99-xnppl: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container default-http-backend Jan 29 18:17:23.255: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-xnppl Jan 29 18:17:23.255: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 18:17:23.255: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 18:17:23.255: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 18:17:23.255: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 18:17:23.255: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 18:17:23.255: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 18:17:23.255: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-4xsdn to bootstrap-e2e-minion-group-s96g Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 739.554608ms (739.575415ms including waiting) Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.702613211s (1.702626118s including waiting) Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container metadata-proxy Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container metadata-proxy Jan 29 18:17:23.255: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-4xsdn: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-4xsdn: {kubelet bootstrap-e2e-minion-group-s96g} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8v287 to bootstrap-e2e-minion-group-dsnz Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 739.759923ms (739.769687ms including waiting) Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.535493899s (1.535504014s including waiting) Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-8v287: {kubelet bootstrap-e2e-minion-group-dsnz} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-dnsxr to bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 703.801985ms (703.819535ms including waiting) Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet 
bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.694396463s (1.694410106s including waiting) Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-dnsxr: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-glg4c to bootstrap-e2e-master Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 702.853427ms (702.859315ms including waiting) Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.958774245s (1.958780937s including waiting) Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1-glg4c: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-4xsdn Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8v287 Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-dnsxr Jan 29 18:17:23.256: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-glg4c Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. 
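
The FailedScheduling messages above report "3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }", i.e. the scheduler saw all three minions tainted as not ready while they were rebooting. A quick way to confirm the node state behind such messages is to dump each node's Ready condition together with its taints; the sketch below does that with client-go, again assuming the same kubeconfig path.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Extract the Ready condition; Unknown if the kubelet stopped reporting.
		ready := corev1.ConditionUnknown
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = c.Status
			}
		}
		fmt.Printf("%s Ready=%s taints=%v\n", n.Name, ready, n.Spec.Taints)
	}
}
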
Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-gpgw8 to bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.415068214s (3.415086682s including waiting) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metrics-server Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metrics-server Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 937.450811ms (937.460293ms including waiting) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container metrics-server-nanny Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container metrics-server-nanny Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container metrics-server Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container metrics-server-nanny Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c-gpgw8: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-gpgw8 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-gpgw8 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 to bootstrap-e2e-minion-group-dsnz Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.219802168s (1.219816016s including waiting) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 923.70697ms (923.715586ms including waiting) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server-nanny Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server-nanny Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container metrics-server-nanny Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Killing: Stopping container metrics-server Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Created: Created container metrics-server-nanny Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Started: Started container metrics-server-nanny Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: Get "https://10.64.1.8:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.8:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Liveness probe failed: Get "https://10.64.1.8:10250/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {kubelet bootstrap-e2e-minion-group-dsnz} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9-fpt69: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-fpt69 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 18:17:23.256: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.238241255s (2.238248989s including waiting) Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:17:23.256: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
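
The repeated "Back-off restarting failed container volume-snapshot-controller" events above indicate the container is crash looping after the node disruption, so the interesting output is usually in the previous container instance's log rather than the current one. A minimal client-go sketch for fetching that previous log is shown below; the pod and container names are taken from this run and the kubeconfig path is assumed.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Previous=true asks the kubelet for the log of the last terminated instance.
	req := cs.CoreV1().Pods("kube-system").GetLogs("volume-snapshot-controller-0",
		&corev1.PodLogOptions{Container: "volume-snapshot-controller", Previous: true})
	data, err := req.Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
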
Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Created: Created container volume-snapshot-controller Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Started: Started container volume-snapshot-controller Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} Killing: Stopping container volume-snapshot-controller Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-9h8t} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(15aa184f-ad8f-486f-b5cc-f97b406e1a24) Jan 29 18:17:23.257: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 18:17:23.257 (63ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 18:17:23.257 Jan 29 18:17:23.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 18:17:23.303 (46ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 18:17:23.303 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 18:17:23.303 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 18:17:23.303 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 18:17:23.309 STEP: Collecting events from namespace "reboot-7713". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 18:17:23.309 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 18:17:23.35 Jan 29 18:17:23.392: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 18:17:23.392: INFO: Jan 29 18:17:23.436: INFO: Logging node info for node bootstrap-e2e-master Jan 29 18:17:23.478: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master de4d0d91-417f-4d9e-8e88-821fcf72cad3 2832 0 2023-01-29 17:57:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 17:57:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 17:57:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 18:12:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858378752 0} {<nil>} 3767948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596234752 0} {<nil>} 3511948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:24 +0000 UTC,LastTransitionTime:2023-01-29 17:57:24 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:12:49 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:12:49 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:12:49 +0000 UTC,LastTransitionTime:2023-01-29 17:57:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:12:49 +0000 UTC,LastTransitionTime:2023-01-29 17:57:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.63.53,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:284bab09bf08f5292691fdfc4343523f,SystemUUID:284bab09-bf08-f529-2691-fdfc4343523f,BootID:33ca137b-9efb-4fae-bd1d-b736b2efdf21,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:17:23.479: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 18:17:23.526: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 18:17:23.584: INFO: metadata-proxy-v0.1-glg4c started at 2023-01-29 17:57:10 +0000 UTC (0+2 container statuses recorded) Jan 29 18:17:23.584: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 18:17:23.584: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 18:17:23.584: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.584: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 29 18:17:23.584: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.584: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 29 18:17:23.584: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 17:56:39 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.584: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 29 18:17:23.584: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.584: INFO: Container etcd-container ready: true, restart count 1 Jan 29 18:17:23.584: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 17:56:39 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.584: INFO: Container l7-lb-controller ready: false, restart count 6 Jan 29 18:17:23.584: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.584: INFO: Container kube-apiserver ready: true, restart count 1 Jan 29 18:17:23.584: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.584: INFO: Container kube-scheduler ready: true, restart count 5 Jan 29 18:17:23.584: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 17:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.584: INFO: Container etcd-container ready: true, restart count 2 Jan 29 18:17:23.793: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 18:17:23.793: INFO: Logging node info for node bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.837: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9h8t a73caa79-09be-4952-9d8a-63d5ff2cf1d1 2989 0 2023-01-29 17:57:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9h8t kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:03:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 18:14:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 18:14:30 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-9h8t,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:17 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:14:17 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:14:17 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:14:17 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:14:17 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:17 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:17 +0000 UTC,LastTransitionTime:2023-01-29 18:04:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:30 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:30 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:30 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:14:30 +0000 UTC,LastTransitionTime:2023-01-29 18:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.75.88,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9h8t.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9h8t.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5df247ae00a0f0d760b6034aea558213,SystemUUID:5df247ae-00a0-f0d7-60b6-034aea558213,BootID:856ae91e-496d-4106-a705-abcfc446e6ec,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:17:23.838: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.884: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9h8t Jan 29 18:17:23.938: INFO: coredns-6846b5b5f-wbh56 started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.938: INFO: Container coredns ready: true, restart count 12 Jan 29 
18:17:23.938: INFO: metadata-proxy-v0.1-dnsxr started at 2023-01-29 17:57:05 +0000 UTC (0+2 container statuses recorded) Jan 29 18:17:23.938: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 18:17:23.938: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:17:23.938: INFO: konnectivity-agent-mn6xc started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.938: INFO: Container konnectivity-agent ready: true, restart count 8 Jan 29 18:17:23.938: INFO: kube-proxy-bootstrap-e2e-minion-group-9h8t started at 2023-01-29 17:57:04 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.938: INFO: Container kube-proxy ready: false, restart count 7 Jan 29 18:17:23.938: INFO: l7-default-backend-8549d69d99-xnppl started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.938: INFO: Container default-http-backend ready: true, restart count 3 Jan 29 18:17:23.938: INFO: volume-snapshot-controller-0 started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.938: INFO: Container volume-snapshot-controller ready: false, restart count 11 Jan 29 18:17:23.938: INFO: kube-dns-autoscaler-5f6455f985-9smhj started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:23.938: INFO: Container autoscaler ready: false, restart count 3 Jan 29 18:17:24.133: INFO: Latency metrics for node bootstrap-e2e-minion-group-9h8t Jan 29 18:17:24.133: INFO: Logging node info for node bootstrap-e2e-minion-group-dsnz Jan 29 18:17:24.176: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-dsnz 1713c218-b219-4b0d-b0ae-fbc51ee95790 3026 0 2023-01-29 17:57:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-dsnz kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:13:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:14:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:14:39 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 18:14:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-dsnz,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:14:41 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:14:41 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:41 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:41 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:41 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:14:41 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:14:41 +0000 UTC,LastTransitionTime:2023-01-29 18:03:48 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:39 +0000 UTC,LastTransitionTime:2023-01-29 18:14:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:39 +0000 UTC,LastTransitionTime:2023-01-29 18:14:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:39 +0000 UTC,LastTransitionTime:2023-01-29 18:14:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:14:39 +0000 UTC,LastTransitionTime:2023-01-29 18:14:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.175.64,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-dsnz.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-dsnz.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d9c2682ffdf61476c94207a696b50d63,SystemUUID:d9c2682f-fdf6-1476-c942-07a696b50d63,BootID:b1fc4976-52e3-4021-bea5-d719e168a208,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:17:24.179: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-dsnz Jan 29 18:17:24.226: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-dsnz Jan 29 18:17:24.288: INFO: kube-proxy-bootstrap-e2e-minion-group-dsnz started at 2023-01-29 17:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:24.288: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 18:17:24.288: INFO: metadata-proxy-v0.1-8v287 started at 2023-01-29 17:57:04 +0000 UTC (0+2 container statuses recorded) Jan 29 18:17:24.288: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 18:17:24.288: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:17:24.288: INFO: konnectivity-agent-bp4qk started at 2023-01-29 17:57:14 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:24.288: INFO: Container konnectivity-agent ready: false, restart count 3 Jan 29 18:17:24.288: INFO: metrics-server-v0.5.2-867b8754b9-fpt69 started at 2023-01-29 17:57:37 +0000 UTC (0+2 container statuses recorded) Jan 29 18:17:24.288: INFO: Container metrics-server ready: false, restart count 4 Jan 29 18:17:24.288: INFO: Container metrics-server-nanny ready: false, restart count 3 Jan 29 18:17:24.464: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-dsnz Jan 29 18:17:24.464: INFO: Logging node info for node bootstrap-e2e-minion-group-s96g Jan 29 18:17:24.507: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s96g 7998b680-fc29-469b-9d21-811123638809 3025 0 2023-01-29 17:57:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s96g kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 17:57:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 18:13:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 18:14:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 18:14:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 18:14:39 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-19/us-west1-b/bootstrap-e2e-minion-group-s96g,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 18:14:35 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 18:14:35 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 18:14:35 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 18:14:35 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:35 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:35 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 18:14:35 +0000 UTC,LastTransitionTime:2023-01-29 18:03:51 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 17:57:14 +0000 UTC,LastTransitionTime:2023-01-29 17:57:14 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:39 +0000 UTC,LastTransitionTime:2023-01-29 
18:14:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:39 +0000 UTC,LastTransitionTime:2023-01-29 18:14:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 18:14:39 +0000 UTC,LastTransitionTime:2023-01-29 18:14:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 18:14:39 +0000 UTC,LastTransitionTime:2023-01-29 18:14:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.157.204,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s96g.c.k8s-boskos-gce-project-19.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s96g.c.k8s-boskos-gce-project-19.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:01be44620799a91cfe8a68e6a28b1e90,SystemUUID:01be4462-0799-a91c-fe8a-68e6a28b1e90,BootID:2d74e887-840c-4a68-8c50-3b4295ae1098,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 18:17:24.508: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s96g Jan 29 18:17:24.554: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s96g Jan 29 18:17:24.618: INFO: kube-proxy-bootstrap-e2e-minion-group-s96g started at 2023-01-29 17:57:03 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:24.618: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 18:17:24.618: INFO: metadata-proxy-v0.1-4xsdn started at 2023-01-29 17:57:03 +0000 UTC (0+2 container statuses recorded) Jan 29 18:17:24.618: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 18:17:24.618: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 18:17:24.618: INFO: konnectivity-agent-ksl2d started at 2023-01-29 17:57:14 +0000 UTC 
(0+1 container statuses recorded) Jan 29 18:17:24.618: INFO: Container konnectivity-agent ready: false, restart count 3 Jan 29 18:17:24.618: INFO: coredns-6846b5b5f-l4n7p started at 2023-01-29 17:57:19 +0000 UTC (0+1 container statuses recorded) Jan 29 18:17:24.618: INFO: Container coredns ready: false, restart count 2 Jan 29 18:17:24.811: INFO: Latency metrics for node bootstrap-e2e-minion-group-s96g END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 18:17:24.811 (1.502s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 18:17:24.811 (1.508s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 18:17:24.811 STEP: Destroying namespace "reboot-7713" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 18:17:24.811 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 18:17:28.125 (3.314s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 18:17:28.127 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 18:17:28.127 (0s)
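The node dumps above come down to each node's status.conditions list: the node-problem-detector entries (KernelDeadlock, ReadonlyFilesystem, CorruptDockerOverlay2 and the Frequent* checks) plus the kubelet-reported Ready, MemoryPressure, DiskPressure and PIDPressure conditions. For re-checking those conditions by hand outside the e2e run, a minimal client-go sketch follows; it is not the framework's own dump helper, and the kubeconfig path and node name are simply the values that appear in this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name taken from the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-dsnz", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print a subset of the fields the failure dump shows for each condition.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-30s %-6s %-35s %s\n", c.Type, c.Status, c.Reason, c.LastTransitionTime)
	}
}

kubectl describe node bootstrap-e2e-minion-group-dsnz surfaces the same conditions table interactively.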
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 18:17:23.193
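What follows is each worker being driven through an interface outage: the test SSHes into the node, downs eth0, sleeps 120 seconds, then brings it back up (retrying ip link set eth0 up, running dhclient and restarting systemd-networkd), and meanwhile expects the node's Ready condition to flip to false and then back to true and the node's probe-less pods to recover. A rough Go sketch of that Ready-condition polling pattern is below; it is not the e2e framework's helper, and the poll interval, timeouts and node name are illustrative values taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReadyStatus polls until the node's Ready condition reports the
// wanted status, mirroring the "Waiting up to ... for node ... condition Ready
// to be false/true" lines below. Poll interval and error handling are
// illustrative, not the framework's defaults.
func waitForNodeReadyStatus(ctx context.Context, cs kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient API-server blips
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == want, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node := "bootstrap-e2e-minion-group-s96g" // one of the rebooted workers
	ctx := context.Background()
	// After eth0 goes down, the node should drop out of Ready within the 2m budget...
	if err := waitForNodeReadyStatus(ctx, cs, node, corev1.ConditionFalse, 2*time.Minute); err != nil {
		fmt.Println("node never went NotReady:", err)
		return
	}
	// ...and come back once the interface is up again (5m budget in the log).
	if err := waitForNodeReadyStatus(ctx, cs, node, corev1.ConditionTrue, 5*time.Minute); err != nil {
		fmt.Println("node did not recover:", err)
	}
}

The [FAILED] message above is that overall wait giving up on at least one node within the allotted time.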
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:12:22.47 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 18:12:22.47 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:12:22.47 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 18:12:22.47 Jan 29 18:12:22.470: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 18:12:22.472 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 18:12:22.597 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 18:12:22.677 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 18:12:22.767 (296ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:12:22.767 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 18:12:22.767 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 18:12:22.767 Jan 29 18:12:22.913: INFO: Getting bootstrap-e2e-minion-group-s96g Jan 29 18:12:22.913: INFO: Getting bootstrap-e2e-minion-group-dsnz Jan 29 18:12:22.913: INFO: Getting bootstrap-e2e-minion-group-9h8t Jan 29 18:12:22.959: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-dsnz condition Ready to be true Jan 29 18:12:22.959: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-9h8t condition Ready to be true Jan 29 18:12:22.959: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s96g condition Ready to be true Jan 29 18:12:23.001: INFO: Node bootstrap-e2e-minion-group-dsnz has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:12:23.001: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:12:23.001: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8v287" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.001: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Node bootstrap-e2e-minion-group-9h8t has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-9smhj kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0] Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-9smhj kube-proxy-bootstrap-e2e-minion-group-9h8t metadata-proxy-v0.1-dnsxr volume-snapshot-controller-0] Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Node bootstrap-e2e-minion-group-s96g has 2 assigned pods with no liveness probes: 
[kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-4xsdn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-9smhj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s96g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9h8t" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.003: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-dnsxr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 18:12:23.046: INFO: Pod "metadata-proxy-v0.1-8v287": Phase="Running", Reason="", readiness=true. Elapsed: 44.670697ms Jan 29 18:12:23.046: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz": Phase="Running", Reason="", readiness=true. Elapsed: 44.719588ms Jan 29 18:12:23.046: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-dsnz" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.046: INFO: Pod "metadata-proxy-v0.1-8v287" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.046: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-dsnz metadata-proxy-v0.1-8v287] Jan 29 18:12:23.046: INFO: Getting external IP address for bootstrap-e2e-minion-group-dsnz Jan 29 18:12:23.046: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-dsnz(34.168.175.64:22) Jan 29 18:12:23.049: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 46.372237ms Jan 29 18:12:23.049: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 47.292369ms Jan 29 18:12:23.050: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9h8t": Phase="Running", Reason="", readiness=true. Elapsed: 47.156307ms Jan 29 18:12:23.050: INFO: Pod "metadata-proxy-v0.1-dnsxr": Phase="Running", Reason="", readiness=true. 
Elapsed: 47.109765ms Jan 29 18:12:23.050: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9h8t" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Pod "metadata-proxy-v0.1-dnsxr" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:23.050: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g": Phase="Running", Reason="", readiness=true. Elapsed: 47.346638ms Jan 29 18:12:23.050: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s96g" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Pod "metadata-proxy-v0.1-4xsdn": Phase="Running", Reason="", readiness=true. Elapsed: 47.513037ms Jan 29 18:12:23.050: INFO: Pod "metadata-proxy-v0.1-4xsdn" satisfied condition "running and ready, or succeeded" Jan 29 18:12:23.050: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-s96g metadata-proxy-v0.1-4xsdn] Jan 29 18:12:23.050: INFO: Getting external IP address for bootstrap-e2e-minion-group-s96g Jan 29 18:12:23.050: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-s96g(35.233.157.204:22) Jan 29 18:12:23.561: INFO: ssh prow@35.233.157.204:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:12:23.561: INFO: ssh prow@35.233.157.204:22: stdout: "" Jan 29 18:12:23.561: INFO: ssh prow@35.233.157.204:22: stderr: "" Jan 29 18:12:23.561: INFO: ssh prow@35.233.157.204:22: exit code: 0 Jan 29 18:12:23.561: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s96g condition Ready to be false Jan 29 18:12:23.570: INFO: ssh prow@34.168.175.64:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; 
echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 18:12:23.570: INFO: ssh prow@34.168.175.64:22: stdout: "" Jan 29 18:12:23.570: INFO: ssh prow@34.168.175.64:22: stderr: "" Jan 29 18:12:23.570: INFO: ssh prow@34.168.175.64:22: exit code: 0 Jan 29 18:12:23.570: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-dsnz condition Ready to be false Jan 29 18:12:23.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:23.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:25.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 2.089823494s Jan 29 18:12:25.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:25.647: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:25.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:27.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 4.089265325s Jan 29 18:12:27.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:27.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:27.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:29.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.089323695s Jan 29 18:12:29.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:29.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:29.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:31.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 8.089209655s Jan 29 18:12:31.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:31.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:31.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:33.094: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 10.091434454s Jan 29 18:12:33.094: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:33.821: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:33.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:12:35.096: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 12.093161368s Jan 29 18:12:35.096: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:35.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:35.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:37.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 14.090075866s Jan 29 18:12:37.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:37.908: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:37.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:39.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 16.089122196s Jan 29 18:12:39.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:39.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:39.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:41.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 18.089823618s Jan 29 18:12:41.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:41.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:42.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:43.096: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 20.093764363s Jan 29 18:12:43.097: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:44.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:44.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:45.209: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 22.206188231s Jan 29 18:12:45.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:46.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:12:46.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:47.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 24.089320986s Jan 29 18:12:47.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:48.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:48.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:49.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 26.089510553s Jan 29 18:12:49.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:50.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:50.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:51.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 28.089520116s Jan 29 18:12:51.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:52.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:52.219: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:53.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 30.09004384s Jan 29 18:12:53.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:54.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:54.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:55.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 32.089767183s Jan 29 18:12:55.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:56.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:56.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:57.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.089610549s Jan 29 18:12:57.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:12:58.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:58.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:12:59.096: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 36.093181164s Jan 29 18:12:59.096: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:00.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:00.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:01.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 38.089092437s Jan 29 18:13:01.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:02.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:02.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:13:03.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 40.08965695s Jan 29 18:13:03.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:04.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:04.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:05.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 42.090348362s Jan 29 18:13:05.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:06.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:06.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:07.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 44.089619307s Jan 29 18:13:07.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:08.556: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-s96g condition Ready to be true Jan 29 18:13:08.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 18:13:08.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:13:09.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 46.089777878s Jan 29 18:13:09.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:10.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:10.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:13:11.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 48.089577445s Jan 29 18:13:11.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }] Jan 29 18:13:12.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 18:13:12.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 18:13:13.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.089484619s
Jan 29 18:13:13.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:14.697: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-dsnz condition Ready to be true
Jan 29 18:13:14.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:14.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:15.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 52.089954131s
Jan 29 18:13:15.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:16.779: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:16.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:17.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 54.089419772s
Jan 29 18:13:17.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:18.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:18.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:19.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 56.089674638s
Jan 29 18:13:19.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:20.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:20.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:21.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 58.08977252s
Jan 29 18:13:21.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:22.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:22.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 18:13:23.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.090007565s
Jan 29 18:13:23.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:24.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure
Jan 29 18:13:24.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:25.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.089348308s
Jan 29 18:13:25.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:27.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure
Jan 29 18:13:27.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:27.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.089753466s
Jan 29 18:13:27.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:04:15 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC }]
Jan 29 18:13:29.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-dsnz is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:13 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:23 +0000 UTC}]. Failure
Jan 29 18:13:29.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-s96g is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 18:13:08 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 18:13:13 +0000 UTC}]. Failure
Jan 29 18:13:29.092: INFO: Pod "kube-dns-autoscaler-5f6455f985-9smhj": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.089127105s
Jan 29 18:13:29.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-9smhj' on 'bootstrap-e2e-minion-group-9h8t' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 17:57:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 18:03:33 +0000 UTC ContainersNotReady containers with unready status:
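
For reference, the two checks this poll keeps re-evaluating can be sketched with client-go. The Go snippet below is a minimal, hypothetical illustration rather than the e2e framework's own helpers: the package name nodecheck and the functions podRunningReadyOrSucceeded and nodeReadyOrExplain are assumptions made for this sketch. The first mirrors the "running and ready, or succeeded" predicate that keeps failing for kube-dns-autoscaler-5f6455f985-9smhj; the second mirrors the node Ready check that reports the node.kubernetes.io/unreachable taints applied by the node controller.

// Package nodecheck is a hypothetical illustration of the checks seen in the
// log above; it is not part of the Kubernetes e2e framework.
package nodecheck

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podRunningReadyOrSucceeded is an assumed re-implementation of the
// "running and ready, or succeeded" predicate: a pod passes if it has
// Succeeded, or if it is Running and carries condition {Ready True}.
func podRunningReadyOrSucceeded(pod *v1.Pod) (bool, error) {
	if pod.Status.Phase == v1.PodSucceeded {
		return true, nil
	}
	if pod.Status.Phase != v1.PodRunning {
		return false, nil // not running yet and not terminal; keep polling
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
			return true, nil
		}
	}
	// Mirrors the repeated log message: Running but not Ready.
	return false, fmt.Errorf("pod %q on %q didn't have condition {Ready True}; conditions: %v",
		pod.Name, pod.Spec.NodeName, pod.Status.Conditions)
}

// nodeReadyOrExplain is an assumed helper that checks the node's Ready
// condition and, when it is false, reports any node.kubernetes.io/unreachable
// taints set by the node controller, as in the "Condition Ready of node ...
// is false, but Node is tainted by NodeController" lines above.
func nodeReadyOrExplain(ctx context.Context, c kubernetes.Interface, name string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
			return nil // node is Ready
		}
	}
	var taints []string
	for _, t := range node.Spec.Taints {
		if t.Key == v1.TaintNodeUnreachable {
			taints = append(taints, fmt.Sprintf("{%s %s %v}", t.Key, t.Effect, t.TimeAdded))
		}
	}
	return fmt.Errorf("condition Ready of node %s is false; unreachable taints: %v", name, taints)
}

Under these assumptions, the surrounding wait loop would simply re-run both checks until they pass or the deadline expires, which is consistent with the roughly 2s cadence and the 5m0s pod timeout visible in the log above.
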