go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
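For reference, hack/e2e.go is only a thin wrapper around kubetest; the same focused case can be run straight from the built e2e.test binary. A rough sketch, assuming a kubernetes/kubernetes checkout, an already provisioned GCE cluster, and SSH access to the nodes (the Reboot suite shells into each node); the binary path, provider flags and SSH setup (e.g. KUBE_SSH_USER) are assumptions and may need adjusting for your environment:

# build the e2e test binary
make WHAT=test/e2e/e2e.test

# run only this reboot case against an existing cluster
# (focus pattern is a simplified form of the escaped regex above)
./_output/bin/e2e.test \
  --kubeconfig="$HOME/.kube/config" \
  --provider=gce \
  --ginkgo.focus='\[sig-cloud-provider-gcp\] Reboot.*dropping all inbound packets'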
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 04:38:20.567
from ginkgo_report.xml
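For readability, the script the test pushes over SSH to each node (it appears below as an escaped one-line string in the SSH log entries) first allowlists loopback, then inserts an iptables DROP rule for all other inbound traffic, keeps it in place for two minutes, and finally removes both rules:

nohup sh -c '
	set -x
	sleep 10
	while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
	while true; do sudo iptables -I INPUT 2 -j DROP && break; done
	date
	sleep 120
	while true; do sudo iptables -D INPUT -j DROP && break; done
	while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-inbound.log 2>&1 &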
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 04:33:18.214
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 04:33:18.214 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 04:33:18.214
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 04:33:18.214
Jan 30 04:33:18.214: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 04:33:18.215
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 04:33:18.344
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 04:33:18.427
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 04:33:18.508 (294ms)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 04:33:18.508
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 04:33:18.508 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 04:33:18.508
Jan 30 04:33:18.606: INFO: Getting bootstrap-e2e-minion-group-2w7z
Jan 30 04:33:18.651: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-2w7z condition Ready to be true
Jan 30 04:33:18.655: INFO: Getting bootstrap-e2e-minion-group-pr8s
Jan 30 04:33:18.655: INFO: Getting bootstrap-e2e-minion-group-8989
Jan 30 04:33:18.693: INFO: Node bootstrap-e2e-minion-group-2w7z has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h]
Jan 30 04:33:18.693: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h]
Jan 30 04:33:18.693: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hhh7h" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 04:33:18.694: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 30 04:33:18.701: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-pr8s condition Ready to be true
Jan 30 04:33:18.701: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8989 condition Ready to be true
Jan 30 04:33:18.737: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z": Phase="Running", Reason="", readiness=true. Elapsed: 43.331156ms
Jan 30 04:33:18.737: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" satisfied condition "running and ready, or succeeded"
Jan 30 04:33:18.737: INFO: Pod "metadata-proxy-v0.1-hhh7h": Phase="Running", Reason="", readiness=true. Elapsed: 43.648483ms
Jan 30 04:33:18.737: INFO: Pod "metadata-proxy-v0.1-hhh7h" satisfied condition "running and ready, or succeeded"
Jan 30 04:33:18.737: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true.
Pods: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:33:18.737: INFO: Getting external IP address for bootstrap-e2e-minion-group-2w7z Jan 30 04:33:18.737: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-2w7z(34.83.14.121:22) Jan 30 04:33:18.745: INFO: Node bootstrap-e2e-minion-group-pr8s has 4 assigned pods with no liveness probes: [metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s] Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s] Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-pr8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Node bootstrap-e2e-minion-group-8989 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-wqvwp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-27bcp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-vcng2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8989" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.792: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-pr8s": Phase="Running", Reason="", readiness=true. Elapsed: 47.266601ms Jan 30 04:33:18.792: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-pr8s" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.792: INFO: Pod "metadata-proxy-v0.1-wqvwp": Phase="Running", Reason="", readiness=true. Elapsed: 47.203184ms Jan 30 04:33:18.792: INFO: Pod "metadata-proxy-v0.1-wqvwp" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.792: INFO: Pod "metadata-proxy-v0.1-27bcp": Phase="Running", Reason="", readiness=true. Elapsed: 47.252409ms Jan 30 04:33:18.792: INFO: Pod "metadata-proxy-v0.1-27bcp" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.792: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 47.276134ms Jan 30 04:33:18.792: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:18.795: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989": Phase="Running", Reason="", readiness=true. Elapsed: 49.639319ms Jan 30 04:33:18.795: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.795: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 49.817134ms Jan 30 04:33:18.795: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:33:18.795: INFO: Getting external IP address for bootstrap-e2e-minion-group-8989 Jan 30 04:33:18.795: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-8989(34.145.88.234:22) Jan 30 04:33:18.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:19.269: INFO: ssh prow@34.83.14.121:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 04:33:19.269: INFO: ssh prow@34.83.14.121:22: stdout: "" Jan 30 04:33:19.269: INFO: ssh prow@34.83.14.121:22: stderr: "" Jan 30 04:33:19.269: INFO: ssh prow@34.83.14.121:22: exit code: 0 Jan 30 04:33:19.269: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-2w7z condition Ready to be false Jan 30 04:33:19.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 04:33:19.336: INFO: ssh prow@34.145.88.234:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 04:33:19.336: INFO: ssh prow@34.145.88.234:22: stdout: "" Jan 30 04:33:19.336: INFO: ssh prow@34.145.88.234:22: stderr: "" Jan 30 04:33:19.336: INFO: ssh prow@34.145.88.234:22: exit code: 0 Jan 30 04:33:19.336: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8989 condition Ready to be false Jan 30 04:33:19.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:20.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2.089936664s Jan 30 04:33:20.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:20.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.094028154s Jan 30 04:33:20.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:21.356: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:21.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:22.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.089882464s Jan 30 04:33:22.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:22.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.092548238s Jan 30 04:33:22.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:23.403: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:23.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:24.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 6.090423891s Jan 30 04:33:24.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:24.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.092865096s Jan 30 04:33:24.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:25.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:25.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:26.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 8.090006177s Jan 30 04:33:26.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:26.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.092635922s Jan 30 04:33:26.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:27.519: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:27.554: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:28.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.089884081s Jan 30 04:33:28.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:28.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.092206235s Jan 30 04:33:28.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:29.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:29.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:30.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 12.093026942s Jan 30 04:33:30.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:30.846: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.100904617s Jan 30 04:33:30.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:31.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:31.640: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:32.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 14.090029438s Jan 30 04:33:32.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:32.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.091967254s Jan 30 04:33:32.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:33.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:33.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:34.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.089718048s Jan 30 04:33:34.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:34.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.092025652s Jan 30 04:33:34.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:35.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:35.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:36.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 18.089382876s Jan 30 04:33:36.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:36.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.091767543s Jan 30 04:33:36.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:37.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:37.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 20.089666588s Jan 30 04:33:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:38.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.092154208s Jan 30 04:33:38.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:39.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:39.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:40.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.091480838s Jan 30 04:33:40.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:40.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.093675374s Jan 30 04:33:40.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:41.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:41.857: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:42.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 24.08993271s Jan 30 04:33:42.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:42.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.091769529s Jan 30 04:33:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:43.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:43.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:44.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 26.089458883s Jan 30 04:33:44.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:44.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.091131158s Jan 30 04:33:44.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:45.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:45.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:46.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.089462174s Jan 30 04:33:46.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:46.840: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.094930449s Jan 30 04:33:46.840: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:47.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:47.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:48.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 30.089269025s Jan 30 04:33:48.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:48.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.091730521s Jan 30 04:33:48.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:50.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:50.034: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:50.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 32.092360915s Jan 30 04:33:50.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:50.848: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.103372172s Jan 30 04:33:50.848: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:52.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:52.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:52.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.090780232s Jan 30 04:33:52.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:52.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.09249274s Jan 30 04:33:52.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:54.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:54.120: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:54.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 36.090421308s Jan 30 04:33:54.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:54.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.092195478s Jan 30 04:33:54.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:56.133: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:56.167: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:56.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 38.089750203s Jan 30 04:33:56.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:56.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.092129059s Jan 30 04:33:56.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:58.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:58.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:58.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.089684104s Jan 30 04:33:58.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:58.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.092094046s Jan 30 04:33:58.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:00.229: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:00.253: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:00.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 42.089696922s Jan 30 04:34:00.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:00.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.092283232s Jan 30 04:34:00.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:02.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:02.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:02.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 44.088749098s Jan 30 04:34:02.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:02.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.091779976s Jan 30 04:34:02.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:04.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:04.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:04.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.091610014s Jan 30 04:34:04.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:04.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.093244183s Jan 30 04:34:04.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:06.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:06.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:06.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 48.089829917s Jan 30 04:34:06.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:06.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.091410256s Jan 30 04:34:06.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:08.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:08.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:08.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 50.08948208s Jan 30 04:34:08.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:08.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.092219672s Jan 30 04:34:08.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:10.484: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-2w7z condition Ready to be true Jan 30 04:34:10.484: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-8989 condition Ready to be true Jan 30 04:34:10.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:10.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:10.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.090039859s Jan 30 04:34:10.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:10.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.093522121s Jan 30 04:34:10.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:12.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:12.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:12.875: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 54.129793846s Jan 30 04:34:12.875: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:12.876: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.130473878s Jan 30 04:34:12.876: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:14.628: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:14.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:14.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 56.090047052s Jan 30 04:34:14.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:14.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.09163064s Jan 30 04:34:14.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:16.675: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:16.675: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:16.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.089643292s Jan 30 04:34:16.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:16.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.091214836s Jan 30 04:34:16.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:18.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:18.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:18.846: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.1006379s Jan 30 04:34:18.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:18.846: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.101149887s Jan 30 04:34:18.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:20.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:20.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:20.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.092450273s Jan 30 04:34:20.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:20.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.092425372s Jan 30 04:34:20.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:22.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:22.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:22.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.089723568s Jan 30 04:34:22.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:22.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.092189372s Jan 30 04:34:22.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:24.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.092240819s Jan 30 04:34:24.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:24.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.093989079s Jan 30 04:34:24.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:24.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:24.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:26.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.090846887s Jan 30 04:34:26.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:26.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.092630131s Jan 30 04:34:26.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:26.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:26.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:28.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.089627743s Jan 30 04:34:28.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:28.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m10.092270362s Jan 30 04:34:28.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:28.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:28.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:30.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.090361359s Jan 30 04:34:30.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:30.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.092393227s Jan 30 04:34:30.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:31.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:31.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:34:32.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.089898052s Jan 30 04:34:32.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:32.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.091938406s Jan 30 04:34:32.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:33.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:33.051: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:34.840: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.095269721s Jan 30 04:34:34.840: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:34.850: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.105093373s Jan 30 04:34:34.850: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:35.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:35.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:36.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.091307678s Jan 30 04:34:36.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:36.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.092787842s Jan 30 04:34:36.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:37.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:37.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. 
Failure Jan 30 04:34:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.089728206s Jan 30 04:34:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:38.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.091263588s Jan 30 04:34:38.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:39.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:39.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:40.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.090119181s Jan 30 04:34:40.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:40.840: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m22.094554227s Jan 30 04:34:40.840: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:41.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:41.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:42.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.090151374s Jan 30 04:34:42.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:42.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.091785691s Jan 30 04:34:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:43.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:43.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. 
Failure Jan 30 04:34:44.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.092324588s Jan 30 04:34:44.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:44.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.093969811s Jan 30 04:34:44.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:45.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:45.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:46.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.092239833s Jan 30 04:34:46.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:46.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m28.093577844s Jan 30 04:34:46.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:47.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:47.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:48.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.089198587s Jan 30 04:34:48.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:48.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.091765284s Jan 30 04:34:48.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:49.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:49.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:34:50.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.092414712s Jan 30 04:34:50.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:50.845: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.100380849s Jan 30 04:34:50.845: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:51.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:51.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:52.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.089813704s Jan 30 04:34:52.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:52.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m34.091967966s Jan 30 04:34:52.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:53.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:53.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:54.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.089480993s Jan 30 04:34:54.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:54.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.092230785s Jan 30 04:34:54.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:55.583: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:55.583: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:34:56.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.090486097s Jan 30 04:34:56.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:56.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.092422024s Jan 30 04:34:56.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:57.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:57.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:58.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.089544712s Jan 30 04:34:58.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:58.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m40.091395218s Jan 30 04:34:58.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:59.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:59.674: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:00.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.092013579s Jan 30 04:35:00.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:00.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.093658608s Jan 30 04:35:00.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:01.719: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:01.719: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:35:02.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.090221883s Jan 30 04:35:02.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:02.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.09211628s Jan 30 04:35:02.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:03.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:03.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:04.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.089414214s Jan 30 04:35:04.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:04.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m46.091970745s Jan 30 04:35:04.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:05.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:05.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:06.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.089211521s Jan 30 04:35:06.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:06.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.091132599s Jan 30 04:35:06.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:07.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:07.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:35:08.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.089445293s Jan 30 04:35:08.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:08.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.092450427s Jan 30 04:35:08.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:09.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:09.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:10.847: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.101882397s Jan 30 04:35:10.847: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:10.848: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m52.103315284s Jan 30 04:35:10.848: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:11.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:11.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:12.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.089691904s Jan 30 04:35:12.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:12.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.092108432s Jan 30 04:35:12.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:13.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:13.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:35:14.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.089597992s Jan 30 04:35:14.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:14.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.09177707s Jan 30 04:35:14.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:16.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:16.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:16.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.091357728s Jan 30 04:35:16.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:16.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m58.093352524s Jan 30 04:35:16.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:18.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:18.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:18.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.09095707s Jan 30 04:35:18.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:18.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.094041585s Jan 30 04:35:18.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:20.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:20.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. 
Failure Jan 30 04:35:20.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.092008037s Jan 30 04:35:20.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:20.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.093474355s Jan 30 04:35:20.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:22.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:22.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:22.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.090384234s Jan 30 04:35:22.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:22.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m4.091848327s Jan 30 04:35:22.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:24.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:24.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:24.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.090520498s Jan 30 04:35:24.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:24.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.092309694s Jan 30 04:35:24.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:26.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:26.259: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. 
Failure Jan 30 04:35:26.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.08969131s Jan 30 04:35:26.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:26.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.091887235s Jan 30 04:35:26.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:28.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:28.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:28.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.090253607s Jan 30 04:35:28.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:28.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.092041752s Jan 30 04:35:28.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:30.352: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:30.352: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:30.853: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.107809447s Jan 30 04:35:30.853: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:30.853: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.108308222s Jan 30 04:35:30.853: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:32.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:32.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:32.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m14.089550714s Jan 30 04:35:32.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:32.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.091305268s Jan 30 04:35:32.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:34.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:34.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:34.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.089391518s Jan 30 04:35:34.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:34.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.091854055s Jan 30 04:35:34.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hhh7h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-27bcp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8989" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:35:36.488: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:35:36.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z": Phase="Running", Reason="", readiness=true. Elapsed: 46.321341ms Jan 30 04:35:36.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" satisfied condition "running and ready, or succeeded" Jan 30 04:35:36.536: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989": Phase="Running", Reason="", readiness=true. Elapsed: 48.695609ms Jan 30 04:35:36.536: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989" satisfied condition "running and ready, or succeeded" Jan 30 04:35:36.536: INFO: Pod "metadata-proxy-v0.1-hhh7h": Phase="Running", Reason="", readiness=true. Elapsed: 48.815838ms Jan 30 04:35:36.536: INFO: Pod "metadata-proxy-v0.1-27bcp": Phase="Running", Reason="", readiness=true. Elapsed: 48.797183ms Jan 30 04:35:36.536: INFO: Pod "metadata-proxy-v0.1-hhh7h" satisfied condition "running and ready, or succeeded" Jan 30 04:35:36.536: INFO: Pod "metadata-proxy-v0.1-27bcp" satisfied condition "running and ready, or succeeded" Jan 30 04:35:36.536: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:35:36.536: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:35:36.536: INFO: Reboot successful on node bootstrap-e2e-minion-group-8989 Jan 30 04:35:36.536: INFO: Reboot successful on node bootstrap-e2e-minion-group-2w7z Jan 30 04:35:36.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m18.089426305s Jan 30 04:35:36.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:36.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.091969545s Jan 30 04:35:36.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.089575253s Jan 30 04:35:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:38.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.092117098s Jan 30 04:35:38.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:40.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m22.092602815s Jan 30 04:35:40.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:40.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.094055907s Jan 30 04:35:40.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:42.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.09198055s Jan 30 04:35:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:42.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.092056885s Jan 30 04:35:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:44.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m26.089349074s Jan 30 04:35:44.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:44.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.091082977s Jan 30 04:35:44.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:46.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.089566103s Jan 30 04:35:46.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:46.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.091984731s Jan 30 04:35:46.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:31.544: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m12.798698971s Jan 30 04:36:31.544: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m12.79861015s Jan 30 04:36:31.544: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:31.544: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:32.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.089304838s Jan 30 04:36:32.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:32.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.091636245s Jan 30 04:36:32.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:34.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m16.090069448s Jan 30 04:36:34.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:34.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.091804206s Jan 30 04:36:34.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:36.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.089644453s Jan 30 04:36:36.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:36.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.093054531s Jan 30 04:36:36.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m20.089762987s Jan 30 04:36:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:38.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.091472311s Jan 30 04:36:38.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:40.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.090487393s Jan 30 04:36:40.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:40.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.092387082s Jan 30 04:36:40.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:42.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m24.091572406s Jan 30 04:36:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:42.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m24.092832548s Jan 30 04:36:42.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:44.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.090843096s Jan 30 04:36:44.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:44.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.092790369s Jan 30 04:36:44.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:46.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m28.089366243s Jan 30 04:36:46.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:46.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.092015118s Jan 30 04:36:46.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:48.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.089219459s Jan 30 04:36:48.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:48.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.09165781s Jan 30 04:36:48.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:50.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m32.089497321s Jan 30 04:36:50.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:50.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m32.092009972s Jan 30 04:36:50.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:52.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.089338895s Jan 30 04:36:52.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:52.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.092034997s Jan 30 04:36:52.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:54.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m36.091485948s Jan 30 04:36:54.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:54.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m36.093409799s Jan 30 04:36:54.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:56.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.090020617s Jan 30 04:36:56.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:56.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.09159594s Jan 30 04:36:56.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:58.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m40.091278788s Jan 30 04:36:58.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:58.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m40.093649333s Jan 30 04:36:58.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:00.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m42.090113573s Jan 30 04:37:00.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:00.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m42.091832087s Jan 30 04:37:00.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:02.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m44.089470317s Jan 30 04:37:02.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:02.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m44.092358039s Jan 30 04:37:02.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:04.840: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.09471137s Jan 30 04:37:04.840: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:04.858: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.113415663s Jan 30 04:37:04.859: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:06.846: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m48.10127074s Jan 30 04:37:06.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:06.848: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m48.102647674s Jan 30 04:37:06.848: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:08.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.116926462s Jan 30 04:37:08.862: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:08.869: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.124410431s Jan 30 04:37:08.870: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:10.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m52.091474258s Jan 30 04:37:10.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:10.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m52.093326047s Jan 30 04:37:10.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:12.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m54.090945789s Jan 30 04:37:12.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:12.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m54.092386355s Jan 30 04:37:12.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:14.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m56.090453851s Jan 30 04:37:14.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:14.852: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m56.106909195s Jan 30 04:37:14.852: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:16.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.09136336s Jan 30 04:37:16.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:16.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.092684518s Jan 30 04:37:16.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:18.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m0.090281547s Jan 30 04:37:18.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:18.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m0.091734468s Jan 30 04:37:18.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:20.847: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.102113468s Jan 30 04:37:20.847: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:20.847: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.102109685s Jan 30 04:37:20.847: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:22.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m4.089507434s Jan 30 04:37:22.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:22.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m4.092069883s Jan 30 04:37:22.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:24.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m6.091792969s Jan 30 04:37:24.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:24.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m6.093436027s Jan 30 04:37:24.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:26.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m8.090878668s Jan 30 04:37:26.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:26.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m8.092720938s Jan 30 04:37:26.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:28.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.089660246s Jan 30 04:37:28.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:28.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.091364909s Jan 30 04:37:28.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:30.844: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m12.099440725s Jan 30 04:37:30.844: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m12.099340561s Jan 30 04:37:30.845: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:30.845: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:32.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.090402288s Jan 30 04:37:32.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:32.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.092255691s Jan 30 04:37:32.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:34.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m16.090413455s Jan 30 04:37:34.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:34.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m16.091712219s Jan 30 04:37:34.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:36.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.08930965s Jan 30 04:37:36.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:36.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.09196895s Jan 30 04:37:36.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m20.089813948s Jan 30 04:37:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:38.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m20.091322918s Jan 30 04:37:38.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:40.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.092624513s Jan 30 04:37:40.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:40.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.094343289s Jan 30 04:37:40.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:42.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m24.092296405s Jan 30 04:37:42.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m24.092397696s Jan 30 04:37:42.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:42.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:44.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.090788427s Jan 30 04:37:44.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:44.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.092344025s Jan 30 04:37:44.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:46.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m28.092122365s Jan 30 04:37:46.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:46.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m28.093378281s Jan 30 04:37:46.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:48.869: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.124084774s Jan 30 04:37:48.869: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:48.869: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.124232953s Jan 30 04:37:48.869: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:50.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m32.092315941s Jan 30 04:37:50.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:50.844: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m32.098691824s Jan 30 04:37:50.844: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:52.857: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.111674264s Jan 30 04:37:52.857: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:52.858: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.113320881s Jan 30 04:37:52.858: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:54.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m36.090030667s Jan 30 04:37:54.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:54.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m36.091967735s Jan 30 04:37:54.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:56.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.094260039s Jan 30 04:37:56.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:56.839: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.09423733s Jan 30 04:37:56.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:58.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m40.08924486s Jan 30 04:37:58.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:58.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.091800224s Jan 30 04:37:58.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:00.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.091401286s Jan 30 04:38:00.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:00.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.093243794s Jan 30 04:38:00.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:02.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m44.090713431s Jan 30 04:38:02.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:02.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m44.092304422s Jan 30 04:38:02.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:04.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.09006252s Jan 30 04:38:04.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:04.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.091961379s Jan 30 04:38:04.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:06.866: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m48.120754208s Jan 30 04:38:06.866: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:06.867: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m48.122154481s Jan 30 04:38:06.867: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:08.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.089386128s Jan 30 04:38:08.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:08.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.091774055s Jan 30 04:38:08.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:10.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m52.089780438s Jan 30 04:38:10.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:10.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m52.092771012s Jan 30 04:38:10.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:12.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.091371225s Jan 30 04:38:12.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:12.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.093001423s Jan 30 04:38:12.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:14.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m56.091164158s Jan 30 04:38:14.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:14.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.092879679s Jan 30 04:38:14.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:16.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.090382334s Jan 30 04:38:16.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:16.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.091887752s Jan 30 04:38:16.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 5m0.295s) test/e2e/cloud/gcp/reboot.go:136 In [It] (Node Runtime: 5m0s) test/e2e/cloud/gcp/reboot.go:136 Spec Goroutine goroutine 9252 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc000f15638?) 
/usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f7f005c5620?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f7f005c5620?, 0xc0054c6680}, {0x8147128?, 0xc0039ce000}, {0xc0044681a0, 0x182}, 0xc004e36f60) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7f7f005c5620, 0xc0054c6680}) test/e2e/cloud/gcp/reboot.go:141 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111f08?, 0xc0054c6680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 9241 [chan receive, 5 minutes] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7f7f005c5620?, 0xc0054c6680}, {0x8147128?, 0xc0039ce000}, {0x76d190b, 0xb}, {0xc004daf7c0, 0x4, 0x4}, 0x45d964b800, ...) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...) test/e2e/framework/pod/resource.go:508 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f7f005c5620, 0xc0054c6680}, {0x8147128, 0xc0039ce000}, {0x7fff5c36d5ea, 0x3}, {0xc00314f6c0, 0x1f}, {0xc0044681a0, 0x182}) test/e2e/cloud/gcp/reboot.go:284 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 30 04:38:18.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.090003218s Jan 30 04:38:18.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:18.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.092108893s Jan 30 04:38:18.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:18.877: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 5m0.131783255s Jan 30 04:38:18.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:18.877: INFO: Pod kube-dns-autoscaler-5f6455f985-vcng2 failed to be running and ready, or succeeded. Jan 30 04:38:18.879: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.133483875s Jan 30 04:38:18.879: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:18.879: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 30 04:38:18.879: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s] Jan 30 04:38:18.879: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:04:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:22:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:23:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:04:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP: PodIPs:[] StartTime:2023-01-30 04:04:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-30 04:21:35 +0000 UTC,FinishedAt:2023-01-30 04:22:19 +0000 UTC,ContainerID:containerd://162925a21856075cf49544f632652025c5f137b0f1b380565b1993c1123b20f0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:8 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://162925a21856075cf49544f632652025c5f137b0f1b380565b1993c1123b20f0 Started:0xc003a43057}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 30 04:38:18.944: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: Jan 30 04:38:18.944: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: Jan 30 04:38:18.944: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-vcng2: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:04:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:22:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:23:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:04:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP: PodIPs:[] StartTime:2023-01-30 04:04:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-30 04:21:35 +0000 UTC,FinishedAt:2023-01-30 04:22:20 +0000 UTC,ContainerID:containerd://f40f34057e4800a1fc4369d165588cdb4c4762269c1a14a8b2ede3897ee7b792,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:7 
Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://f40f34057e4800a1fc4369d165588cdb4c4762269c1a14a8b2ede3897ee7b792 Started:0xc003a42657}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 30 04:38:18.989: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-vcng2/autoscaler: Jan 30 04:38:18.989: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-vcng2/autoscaler: Jan 30 04:38:18.989: INFO: Node bootstrap-e2e-minion-group-pr8s failed reboot test. Jan 30 04:38:18.989: INFO: Executing termination hook on nodes Jan 30 04:38:18.989: INFO: Getting external IP address for bootstrap-e2e-minion-group-2w7z Jan 30 04:38:18.989: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-2w7z(34.83.14.121:22) Jan 30 04:38:19.508: INFO: ssh prow@34.83.14.121:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 04:38:19.508: INFO: ssh prow@34.83.14.121:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:33:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:38:19.508: INFO: ssh prow@34.83.14.121:22: stderr: "" Jan 30 04:38:19.508: INFO: ssh prow@34.83.14.121:22: exit code: 0 Jan 30 04:38:19.508: INFO: Getting external IP address for bootstrap-e2e-minion-group-8989 Jan 30 04:38:19.508: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-8989(34.145.88.234:22) Jan 30 04:38:20.029: INFO: ssh prow@34.145.88.234:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 04:38:20.029: INFO: ssh prow@34.145.88.234:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:33:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:38:20.029: INFO: ssh prow@34.145.88.234:22: stderr: "" Jan 30 04:38:20.029: INFO: ssh prow@34.145.88.234:22: exit code: 0 Jan 30 04:38:20.029: INFO: Getting external IP address for bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.029: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-pr8s(34.168.173.250:22) Jan 30 04:38:20.567: INFO: ssh prow@34.168.173.250:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 04:38:20.567: INFO: ssh prow@34.168.173.250:22: stdout: "" Jan 30 04:38:20.567: INFO: ssh prow@34.168.173.250:22: stderr: "cat: /tmp/drop-inbound.log: No such file or directory\n" Jan 30 04:38:20.567: INFO: ssh prow@34.168.173.250:22: exit code: 1 Jan 30 04:38:20.567: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-inbound.log: No such file or directory ) [FAILED] Test failed; at least one node failed to reboot in the time given. 
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 04:38:20.567 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 04:38:20.567 (5m2.059s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 04:38:20.567 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 04:38:20.567 Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-9vnqf to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.407024514s (3.407035047s including waiting) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
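The five-minute wait that fails above comes from the framework repeatedly evaluating each pod against the "running and ready, or succeeded" condition (the stack trace names `CheckPodsRunningReadyOrSucceeded`). As an annotation only, the sketch below shows an equivalent check written directly against client-go; it is not the framework's implementation, and the kubeconfig path and pod name are placeholders.

```go
// Illustrative only: an equivalent of the "running and ready, or succeeded"
// pod check polled until a timeout (the failed wait above used 5m).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// runningAndReadyOrSucceeded mirrors the condition logged above: a pod passes
// if it has Succeeded, or if it is Running with the Ready condition True.
func runningAndReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the real run pointed at /workspace/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(5 * time.Minute)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"volume-snapshot-controller-0", metav1.GetOptions{})
		if err == nil && runningAndReadyOrSucceeded(pod) {
			fmt.Println("pod is running and ready, or succeeded")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for pod readiness")
			return
		}
		time.Sleep(2 * time.Second) // the log above shows roughly 2s polling intervals
	}
}
```

In the failed run, kube-dns-autoscaler-5f6455f985-vcng2 and volume-snapshot-controller-0 stayed Running but never regained the Ready condition, so this check never returned true before the 5m0s deadline.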
Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-9vnqf_kube-system(81e628a9-68fb-4bf9-a0f3-07efd15135df) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-9vnqf Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-9vnqf Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-ts65r to bootstrap-e2e-minion-group-8989 Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.048544975s (1.048559529s including waiting) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
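The termination-hook output captured over SSH above records exactly what the test did to each node: insert an ACCEPT rule for loopback, insert a blanket DROP at position 2, leave it in place for 120 seconds, then delete both rules. The Go sketch below restates that sequence for clarity; the iptables arguments are taken from the log, while the wrapper function is hypothetical and root privileges are assumed (the node script used sudo). It is shown only to explain the mechanism, since running it drops all inbound traffic on the host.

```go
// Sketch of the drop-inbound sequence visible in the /tmp/drop-inbound.log
// output above. Requires root; do not run against a machine you need to reach.
package main

import (
	"log"
	"os/exec"
	"time"
)

// iptables is a hypothetical helper around the commands seen in the hook log.
func iptables(args ...string) {
	out, err := exec.Command("iptables", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("iptables %v: %v\n%s", args, err, out)
	}
}

func main() {
	// The node script sleeps first, presumably so the SSH session that
	// launched it can return before connectivity is cut.
	time.Sleep(10 * time.Second)

	// Keep loopback traffic working, then drop everything else inbound.
	iptables("-I", "INPUT", "1", "-s", "127.0.0.1", "-j", "ACCEPT")
	iptables("-I", "INPUT", "2", "-j", "DROP")
	log.Println("inbound traffic dropped at", time.Now().UTC())

	// The hook kept the rules for 120 seconds before restoring connectivity.
	time.Sleep(120 * time.Second)

	iptables("-D", "INPUT", "-j", "DROP")
	iptables("-D", "INPUT", "-s", "127.0.0.1", "-j", "ACCEPT")
	log.Println("inbound traffic restored")
}
```

Note that bootstrap-e2e-minion-group-pr8s had no /tmp/drop-inbound.log left to collect, which is consistent with that node being the one whose pods never recovered within the window.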
Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-ts65r Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-ts65r Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-ts65r Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-9vnqf Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-ts65r Jan 30 04:38:20.628: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 04:38:20.628: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: + exec /usr/local/bin/etcdctl --endpoints=127.0.0.1:2379 --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key --command-timeout=15s endpoint health Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
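The "Spec Goroutine" dump earlier in this failure shows testReboot parked in sync.WaitGroup.Wait for the full five minutes while one goroutine per node (testReboot.func2 → rebootNode) waits for that node's pods to recover. Reduced to a self-contained sketch, the fan-out looks like the following; the node names are the ones from this run, but the reboot step itself is a placeholder.

```go
// Minimal sketch of the per-node fan-out implied by the goroutine dump above:
// one worker per node, with the parent blocked in WaitGroup.Wait until every
// worker reports success or failure. The reboot step is a placeholder.
package main

import (
	"fmt"
	"sync"
	"time"
)

func rebootNode(name string) bool {
	// Placeholder for the real work: disrupt the node, then wait for its
	// pods to become running and ready, or succeeded, within a timeout.
	time.Sleep(100 * time.Millisecond)
	return name != "bootstrap-e2e-minion-group-pr8s" // simulate the one failure seen above
}

func main() {
	nodes := []string{
		"bootstrap-e2e-minion-group-2w7z",
		"bootstrap-e2e-minion-group-8989",
		"bootstrap-e2e-minion-group-pr8s",
	}

	results := make([]bool, len(nodes))
	var wg sync.WaitGroup
	for i, name := range nodes {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			results[i] = rebootNode(name)
		}(i, name)
	}
	wg.Wait() // the Spec Goroutine above is blocked here until all nodes report

	for i, ok := range results {
		if !ok {
			fmt.Printf("node %s failed reboot test\n", nodes[i])
		}
	}
}
```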
Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e368a became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a49a became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1499f became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ebdc5 became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_657e6 became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_80e5c became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1624 became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8907e became leader Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-kfwd4 to bootstrap-e2e-minion-group-8989 Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 626.102447ms (626.126027ms including waiting) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed 
and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-kfwd4_kube-system(dff52a9f-4523-49f5-adce-8d91398aa0ca) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-kfwd4_kube-system(dff52a9f-4523-49f5-adce-8d91398aa0ca) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rzzz6 to bootstrap-e2e-minion-group-2w7z Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 649.515062ms (649.530311ms including waiting) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rzzz6_kube-system(907b1f90-0d41-4e45-be42-cb71fe53653b) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
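The konnectivity-agent events around this point come from the kubelet running an HTTP liveness probe against the agent's :8093/healthz endpoint; while inbound traffic is dropped the probe times out ("context deadline exceeded"), the container is restarted, and repeated failures push it into back-off. For reference, the sketch below expresses such a probe with the corev1 API types; the path, port and image are taken from the events, while the period and thresholds are placeholders rather than the addon's actual manifest values.

```go
// Illustrative container spec with an HTTP liveness probe like the one whose
// failures appear in the konnectivity-agent events above. Period and
// thresholds are placeholders, not the addon's real manifest values.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "konnectivity-agent",
		Image: "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1",
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8093),
				},
			},
			PeriodSeconds:    10, // placeholder
			TimeoutSeconds:   5,  // exceeding this yields "context deadline exceeded"
			FailureThreshold: 3,  // after this many failures the kubelet restarts the container
		},
	}
	fmt.Printf("%+v\n", container.LivenessProbe)
}
```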
Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wm5g7 to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 933.59456ms (933.605653ms including waiting) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Failed: Error: failed to get sandbox container task: no running task found: task f5fb933e314e02e8c688680c6515433f89f38b11e6128a51e48c4bb125c4e747 not found: not found Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.17:8093/healthz": dial tcp 10.64.3.17:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-wm5g7_kube-system(f4233209-4be0-4cd8-94ce-ced438d88b3f) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wm5g7 Jan 30 04:38:20.628: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rzzz6 Jan 30 04:38:20.628: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-kfwd4 Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 30 04:38:20.628: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:38:20.628: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 30 04:38:20.628: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 04:38:20.628: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(d0b483a2668f277999bcc23ee75fc99e) Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c6c2a8ca-a36c-403f-9999-a2b000b3920e became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f291cda0-4aa2-4a2c-b2d0-0571517f319b became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c833dc2a-e038-4f45-b7ad-08d2638f0b9e became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2b9fcaeb-7945-435c-8c80-25e01cb35133 became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_01f555f6-f30b-41f2-b059-02563b17831c became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: 
{kube-controller-manager } LeaderElection: bootstrap-e2e-master_6c1d9ef4-f4a8-4f03-97e6-e59820e12b3c became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f84099c9-418f-455f-8646-356ff896dfa0 became leader Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-vcng2 to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.072385291s (3.072406123s including waiting) Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container autoscaler Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container autoscaler Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container autoscaler Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-vcng2_kube-system(5881f6ae-7dab-414e-bcbe-bad1b6578adb) Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-vcng2 Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-vcng2 Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-vcng2 Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-2w7z_kube-system(de89eacf2d0b5006d7508757b58cec1d) Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-2w7z_kube-system(de89eacf2d0b5006d7508757b58cec1d) Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8989_kube-system(7391456f443d7cab197930929fc65610) Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-pr8s_kube-system(efb458e63148764d607d005f4ad36f66) Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-pr8s_kube-system(efb458e63148764d607d005f4ad36f66) Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(f86a03c82069d9e676da0b89466a1071) Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_832b3d7b-7090-4716-933e-249d446b7700 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c8af7ead-b18f-4a96-ac6e-6319fcf78599 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_66792645-ae07-449c-af44-6041568b48bf became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_57583faf-e90c-44f4-a218-579af7af084e became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e1367ebc-037a-4b50-866c-f11a6a850374 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_5766692c-a1b5-4674-a69d-a922221c2db7 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f0787176-244e-4b86-847d-ff46a54637b1 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c08b36c4-2380-4d34-932e-7ebb0016ce1b became leader Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-mh466 to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.406902484s (1.406912463s including waiting) Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container default-http-backend Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container default-http-backend Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-mh466 Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-mh466 Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-mh466 Jan 30 04:38:20.629: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://10.138.0.2:8086/healthz": dial tcp 10.138.0.2:8086: connect: connection refused Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-27bcp to bootstrap-e2e-minion-group-8989 Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 799.8822ms (799.892307ms including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.970227248s (1.970246647s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} DNSConfigForming: Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8zhwm to bootstrap-e2e-master Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 1.580995675s (1.581005335s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.012795189s (2.012805077s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hhh7h to bootstrap-e2e-minion-group-2w7z Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 813.017719ms (813.044243ms including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: 
event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.959476196s (1.959487572s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-wqvwp to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 777.552996ms (777.571077ms including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.037405647s (2.037416731s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8zhwm Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hhh7h Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-27bcp Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-wqvwp Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-dv4lg to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.099427436s (2.099436859s including waiting) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.930381102s (2.930390437s including waiting) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-dv4lg Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-dv4lg Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq to bootstrap-e2e-minion-group-2w7z Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.854731096s (1.854741893s including waiting) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.024346656s (1.024359447s including waiting) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container 
metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-4d2dq_kube-system(255d13f5-f893-4d1d-9807-59a67e85d69e) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "https://10.64.1.11:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.789389581s (3.789407141s including waiting) Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container volume-snapshot-controller Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container volume-snapshot-controller Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container volume-snapshot-controller Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e18f5204-5261-40fa-8f57-029fca0d6f08)
Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 04:38:20.629 (62ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 04:38:20.629
Jan 30 04:38:20.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 04:38:20.674 (45ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 04:38:20.674
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 04:38:20.674 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 04:38:20.674
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 04:38:20.675
STEP: Collecting events from namespace "reboot-8598". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 04:38:20.675
STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 04:38:20.718 Jan 30 04:38:20.760: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 04:38:20.760: INFO: Jan 30 04:38:20.809: INFO: Logging node info for node bootstrap-e2e-master Jan 30 04:38:20.853: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6f4de288-21eb-465e-a25d-71a0f115d23a 4115 0 2023-01-30 04:04:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 04:04:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:04:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 04:36:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858378752 0} {<nil>} 3767948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596234752 0} {<nil>} 3511948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:36:28 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:36:28 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:36:28 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:36:28 +0000 UTC,LastTransitionTime:2023-01-30 04:04:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.49.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2f99bd7dadbd46f22ce4edb25d7437ee,SystemUUID:2f99bd7d-adbd-46f2-2ce4-edb25d7437ee,BootID:e341edb6-7aff-48fb-a607-613234201f7f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:38:20.853: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 04:38:20.907: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 04:38:20.969: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container konnectivity-server-container ready: true, restart count 3 Jan 30 04:38:20.969: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 04:03:41 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container kube-scheduler ready: true, restart count 8 Jan 30 04:38:20.969: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 04:03:59 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container l7-lb-controller ready: true, restart count 9 Jan 30 04:38:20.969: INFO: metadata-proxy-v0.1-8zhwm started at 2023-01-30 04:05:00 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:20.969: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 04:38:20.969: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 04:38:20.969: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container etcd-container ready: true, restart count 3 Jan 30 04:38:20.969: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container kube-apiserver ready: true, restart count 2 Jan 30 04:38:20.969: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container kube-controller-manager ready: true, restart count 9 Jan 30 04:38:20.969: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 04:03:59 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 30 04:38:20.969: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container etcd-container ready: true, restart count 5 Jan 30 04:38:21.141: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 04:38:21.141: INFO: Logging node info for node bootstrap-e2e-minion-group-2w7z Jan 30 04:38:21.184: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2w7z 0dc9c89e-8b35-476f-a0b5-71d6a867b027 4093 0 2023-01-30 04:04:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2w7z kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 04:34:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:35:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 04:35:31 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{"f:address":{}},"k:{\"type\":\"InternalIP\"}":{"f:address":{}}},"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 04:35:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-2w7z,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.14.121,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2w7z.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2w7z.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a2d81fdfb5c6322fd13b8b18a04da55,SystemUUID:3a2d81fd-fb5c-6322-fd13-b8b18a04da55,BootID:9a5d1d65-a056-45ad-aa38-e844f34fefec,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:38:21.184: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2w7z Jan 30 04:38:21.235: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2w7z Jan 30 04:38:21.300: INFO: konnectivity-agent-rzzz6 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.300: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 30 04:38:21.300: INFO: metrics-server-v0.5.2-867b8754b9-4d2dq started at 2023-01-30 04:05:06 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:21.300: INFO: Container metrics-server ready: false, restart count 4 Jan 30 04:38:21.300: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 30 04:38:21.300: INFO: kube-proxy-bootstrap-e2e-minion-group-2w7z started at 2023-01-30 04:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.300: INFO: Container kube-proxy ready: true, restart count 8 Jan 30 04:38:21.300: INFO: metadata-proxy-v0.1-hhh7h started at 2023-01-30 04:04:30 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:21.300: INFO: Container metadata-proxy ready: true, restart count 3 Jan 30 04:38:21.300: INFO: Container prometheus-to-sd-exporter ready: true, restart count 
3 Jan 30 04:38:21.475: INFO: Latency metrics for node bootstrap-e2e-minion-group-2w7z Jan 30 04:38:21.475: INFO: Logging node info for node bootstrap-e2e-minion-group-8989 Jan 30 04:38:21.518: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8989 4a75ddd1-ef06-47df-ade8-574d74cb42ab 4092 0 2023-01-30 04:04:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8989 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 04:34:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:35:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 04:35:31 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 04:35:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-8989,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 
+0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.88.234,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8989.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8989.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd7b470886f95ec884f70a2ac96a6ad7,SystemUUID:dd7b4708-86f9-5ec8-84f7-0a2ac96a6ad7,BootID:acf91d68-25f3-43f8-801b-71f8229c37d6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:38:21.518: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8989 Jan 30 04:38:21.565: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8989 Jan 30 04:38:21.629: INFO: konnectivity-agent-kfwd4 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.629: 
INFO: Container konnectivity-agent ready: true, restart count 11 Jan 30 04:38:21.629: INFO: coredns-6846b5b5f-ts65r started at 2023-01-30 04:04:52 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.629: INFO: Container coredns ready: false, restart count 6 Jan 30 04:38:21.629: INFO: kube-proxy-bootstrap-e2e-minion-group-8989 started at 2023-01-30 04:04:33 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.629: INFO: Container kube-proxy ready: true, restart count 6 Jan 30 04:38:21.629: INFO: metadata-proxy-v0.1-27bcp started at 2023-01-30 04:04:34 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:21.629: INFO: Container metadata-proxy ready: true, restart count 3 Jan 30 04:38:21.629: INFO: Container prometheus-to-sd-exporter ready: true, restart count 3 Jan 30 04:38:21.794: INFO: Latency metrics for node bootstrap-e2e-minion-group-8989 Jan 30 04:38:21.794: INFO: Logging node info for node bootstrap-e2e-minion-group-pr8s Jan 30 04:38:21.837: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-pr8s 3cd1e5e1-5c3f-4d16-a492-09b76a02380e 4311 0 2023-01-30 04:04:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-pr8s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 04:22:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:23:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 04:33:21 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status} {node-problem-detector Update v1 2023-01-30 04:38:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-pr8s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:33:21 +0000 UTC,LastTransitionTime:2023-01-30 04:23:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:33:21 +0000 UTC,LastTransitionTime:2023-01-30 04:23:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:33:21 +0000 UTC,LastTransitionTime:2023-01-30 04:23:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:33:21 +0000 UTC,LastTransitionTime:2023-01-30 04:23:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.173.250,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-pr8s.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-pr8s.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9bbcb548c0d5032d8123f6780ca06f95,SystemUUID:9bbcb548-c0d5-032d-8123-f6780ca06f95,BootID:20621036-c2c2-44b6-993d-20c8d5436e83,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:38:21.838: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-pr8s Jan 30 04:38:21.885: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-pr8s Jan 30 04:38:21.938: INFO: konnectivity-agent-wm5g7 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 30 04:38:21.938: INFO: kube-proxy-bootstrap-e2e-minion-group-pr8s started at 2023-01-30 04:04:33 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container kube-proxy ready: true, restart count 11 Jan 30 04:38:21.938: INFO: l7-default-backend-8549d69d99-mh466 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container default-http-backend ready: false, restart count 2 Jan 30 04:38:21.938: INFO: kube-dns-autoscaler-5f6455f985-vcng2 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container autoscaler ready: false, restart count 7 Jan 30 04:38:21.938: INFO: volume-snapshot-controller-0 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container volume-snapshot-controller ready: false, restart count 8 Jan 30 04:38:21.938: INFO: coredns-6846b5b5f-9vnqf started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container coredns ready: false, restart count 6 Jan 30 04:38:21.938: INFO: metadata-proxy-v0.1-wqvwp started at 2023-01-30 04:04:34 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:21.938: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 04:38:21.938: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 04:38:22.103: INFO: Latency metrics for node bootstrap-e2e-minion-group-pr8s END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 04:38:22.103 (1.428s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 04:38:22.103 (1.429s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 04:38:22.103 STEP: Destroying namespace "reboot-8598" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/30/23 04:38:22.103 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 04:38:22.148 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 04:38:22.149 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 04:38:22.149 (0s)
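Note: the failure recorded for this spec ("at least one node failed to reboot in the time given", reboot.go:190) is the drop-all-inbound-packets variant of the reboot test. For readability, this is the command the suite pushes to each node over SSH, reconstructed from the escaped one-liner recorded earlier in the log: it first allows loopback, then inserts an iptables rule that drops all other inbound traffic for roughly two minutes (sleep 120), and finally removes both rules, while the test waits for the node to drop out of Ready and then come back with its pods running.

    nohup sh -c '
        set -x
        sleep 10
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &

A quick manual check of the node condition the test polls (hypothetical kubectl invocation, assuming access to the same cluster):

    kubectl get node bootstrap-e2e-minion-group-2w7z -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'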
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 04:38:20.567from junit_01.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 04:33:18.214 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 04:33:18.214 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 04:33:18.214 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 04:33:18.214 Jan 30 04:33:18.214: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 04:33:18.215 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 04:33:18.344 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 04:33:18.427 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 04:33:18.508 (294ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 04:33:18.508 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 04:33:18.508 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 04:33:18.508 Jan 30 04:33:18.606: INFO: Getting bootstrap-e2e-minion-group-2w7z Jan 30 04:33:18.651: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-2w7z condition Ready to be true Jan 30 04:33:18.655: INFO: Getting bootstrap-e2e-minion-group-pr8s Jan 30 04:33:18.655: INFO: Getting bootstrap-e2e-minion-group-8989 Jan 30 04:33:18.693: INFO: Node bootstrap-e2e-minion-group-2w7z has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:33:18.693: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:33:18.693: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hhh7h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.694: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.701: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-pr8s condition Ready to be true Jan 30 04:33:18.701: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8989 condition Ready to be true Jan 30 04:33:18.737: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z": Phase="Running", Reason="", readiness=true. Elapsed: 43.331156ms Jan 30 04:33:18.737: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.737: INFO: Pod "metadata-proxy-v0.1-hhh7h": Phase="Running", Reason="", readiness=true. Elapsed: 43.648483ms Jan 30 04:33:18.737: INFO: Pod "metadata-proxy-v0.1-hhh7h" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.737: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:33:18.737: INFO: Getting external IP address for bootstrap-e2e-minion-group-2w7z Jan 30 04:33:18.737: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-2w7z(34.83.14.121:22) Jan 30 04:33:18.745: INFO: Node bootstrap-e2e-minion-group-pr8s has 4 assigned pods with no liveness probes: [metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s] Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s] Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-pr8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Node bootstrap-e2e-minion-group-8989 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-wqvwp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-27bcp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-vcng2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.745: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8989" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:33:18.792: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-pr8s": Phase="Running", Reason="", readiness=true. Elapsed: 47.266601ms Jan 30 04:33:18.792: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-pr8s" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.792: INFO: Pod "metadata-proxy-v0.1-wqvwp": Phase="Running", Reason="", readiness=true. Elapsed: 47.203184ms Jan 30 04:33:18.792: INFO: Pod "metadata-proxy-v0.1-wqvwp" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.792: INFO: Pod "metadata-proxy-v0.1-27bcp": Phase="Running", Reason="", readiness=true. Elapsed: 47.252409ms Jan 30 04:33:18.792: INFO: Pod "metadata-proxy-v0.1-27bcp" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.792: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 47.276134ms Jan 30 04:33:18.792: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:18.795: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989": Phase="Running", Reason="", readiness=true. Elapsed: 49.639319ms Jan 30 04:33:18.795: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989" satisfied condition "running and ready, or succeeded" Jan 30 04:33:18.795: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 49.817134ms Jan 30 04:33:18.795: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:33:18.795: INFO: Getting external IP address for bootstrap-e2e-minion-group-8989 Jan 30 04:33:18.795: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-8989(34.145.88.234:22) Jan 30 04:33:18.795: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:19.269: INFO: ssh prow@34.83.14.121:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 04:33:19.269: INFO: ssh prow@34.83.14.121:22: stdout: "" Jan 30 04:33:19.269: INFO: ssh prow@34.83.14.121:22: stderr: "" Jan 30 04:33:19.269: INFO: ssh prow@34.83.14.121:22: exit code: 0 Jan 30 04:33:19.269: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-2w7z condition Ready to be false Jan 30 04:33:19.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 04:33:19.336: INFO: ssh prow@34.145.88.234:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 04:33:19.336: INFO: ssh prow@34.145.88.234:22: stdout: "" Jan 30 04:33:19.336: INFO: ssh prow@34.145.88.234:22: stderr: "" Jan 30 04:33:19.336: INFO: ssh prow@34.145.88.234:22: exit code: 0 Jan 30 04:33:19.336: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8989 condition Ready to be false Jan 30 04:33:19.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:20.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2.089936664s Jan 30 04:33:20.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:20.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.094028154s Jan 30 04:33:20.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:21.356: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:21.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:22.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
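Note: the command the test pushes over SSH to each node (logged verbatim above, with \n\t escapes) first inserts an ACCEPT rule for loopback, then a DROP rule for all other inbound traffic, sleeps ~120s, and finally deletes both rules so the node becomes reachable again. A minimal Go sketch of issuing that same script is below; the prow@<external-ip> login and the use of the plain ssh binary are assumptions for illustration, the real test goes through the e2e framework's own SSH helper.

```go
package main

import (
	"fmt"
	"os/exec"
)

// dropInboundScript mirrors the script logged above: accept loopback first,
// then drop all other inbound packets for ~120s, then delete both rules.
const dropInboundScript = `
nohup sh -c '
	set -x
	sleep 10
	while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
	while true; do sudo iptables -I INPUT 2 -j DROP && break; done
	date
	sleep 120
	while true; do sudo iptables -D INPUT -j DROP && break; done
	while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-inbound.log 2>&1 &
`

func main() {
	// Placeholder for the node's external IP (e.g. 34.83.14.121 in the log above).
	host := "NODE_EXTERNAL_IP"

	// Plain ssh is only a stand-in here for the framework's SSH helper.
	out, err := exec.Command("ssh", "prow@"+host, dropInboundScript).CombinedOutput()
	fmt.Printf("exit err: %v\noutput: %s\n", err, out)
}
```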
Elapsed: 4.089882464s Jan 30 04:33:22.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:22.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.092548238s Jan 30 04:33:22.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:23.403: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:23.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:24.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 6.090423891s Jan 30 04:33:24.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:24.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.092865096s Jan 30 04:33:24.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:25.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:25.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:26.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 8.090006177s Jan 30 04:33:26.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:26.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.092635922s Jan 30 04:33:26.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:27.519: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:27.554: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:28.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
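Note: the repeated "running and ready, or succeeded" messages come from a per-pod poll: roughly every 2s the pod is re-fetched and passes once it is Succeeded, or Running with the Ready condition True; kube-dns-autoscaler-5f6455f985-vcng2 and volume-snapshot-controller-0 never pass because their containers stay unready. A rough client-go sketch of that check follows (kubeconfig path, namespace, and pod name taken from the log; the 2s interval and wait.PollImmediate are assumptions standing in for the framework's own wait helper).

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReadyOrSucceeded mirrors the "running and ready, or succeeded" check:
// Succeeded passes immediately; Running passes only once Ready is True.
func isPodReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every ~2s for up to 5m, matching the elapsed times in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-dns-autoscaler-5f6455f985-vcng2", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors, keep polling
		}
		return isPodReadyOrSucceeded(pod), nil
	})
	fmt.Println("wait result:", err)
}
```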
Elapsed: 10.089884081s Jan 30 04:33:28.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:28.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.092206235s Jan 30 04:33:28.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:29.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:29.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:30.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 12.093026942s Jan 30 04:33:30.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:30.846: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.100904617s Jan 30 04:33:30.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:31.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:31.640: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:32.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 14.090029438s Jan 30 04:33:32.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:32.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.091967254s Jan 30 04:33:32.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:33.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:33.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:34.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.089718048s Jan 30 04:33:34.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:34.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.092025652s Jan 30 04:33:34.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:35.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:35.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:36.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 18.089382876s Jan 30 04:33:36.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:36.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.091767543s Jan 30 04:33:36.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:37.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:37.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 20.089666588s Jan 30 04:33:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:38.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.092154208s Jan 30 04:33:38.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:39.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:39.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:40.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.091480838s Jan 30 04:33:40.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:40.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.093675374s Jan 30 04:33:40.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:41.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:41.857: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:42.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 24.08993271s Jan 30 04:33:42.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:42.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.091769529s Jan 30 04:33:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:43.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:43.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:44.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 26.089458883s Jan 30 04:33:44.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:44.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.091131158s Jan 30 04:33:44.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:45.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:45.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:46.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
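Note: the interleaved "Condition Ready of node ... is true instead of false" lines are the other half of the test: after injecting the packet drop it waits up to 2m0s for each node's Ready condition to stop being True, which happens once the kubelet has been unreachable long enough for the node controller to mark the status unknown. A hedged client-go sketch of reading that condition is below (node name and kubeconfig path from the log; the nodeReadyStatus helper is illustrative, not the framework's code).

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReadyStatus returns the status of the node's Ready condition, or
// ConditionUnknown if the condition is missing entirely.
func nodeReadyStatus(node *corev1.Node) corev1.ConditionStatus {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status
		}
	}
	return corev1.ConditionUnknown
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Wait up to 2m for Ready to stop being True; anything other than True
	// (False or Unknown) counts, mirroring the "condition Ready to be false"
	// wait in the log above.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(),
			"bootstrap-e2e-minion-group-2w7z", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient API errors
		}
		return nodeReadyStatus(node) != corev1.ConditionTrue, nil
	})
	fmt.Println("Ready left the True state:", err == nil)
}
```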
Elapsed: 28.089462174s Jan 30 04:33:46.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:46.840: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.094930449s Jan 30 04:33:46.840: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:47.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:47.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:48.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 30.089269025s Jan 30 04:33:48.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:48.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.091730521s Jan 30 04:33:48.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:50.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:50.034: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:50.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 32.092360915s Jan 30 04:33:50.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:50.848: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.103372172s Jan 30 04:33:50.848: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:52.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:52.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:52.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.090780232s Jan 30 04:33:52.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:52.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.09249274s Jan 30 04:33:52.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:54.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:54.120: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:54.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 36.090421308s Jan 30 04:33:54.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:54.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.092195478s Jan 30 04:33:54.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:56.133: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:56.167: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:56.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 38.089750203s Jan 30 04:33:56.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:56.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.092129059s Jan 30 04:33:56.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:58.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:58.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:33:58.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.089684104s Jan 30 04:33:58.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:33:58.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.092094046s Jan 30 04:33:58.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:00.229: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:00.253: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:00.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 42.089696922s Jan 30 04:34:00.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:00.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.092283232s Jan 30 04:34:00.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:02.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:02.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:02.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 44.088749098s Jan 30 04:34:02.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:02.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.091779976s Jan 30 04:34:02.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:04.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:04.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:04.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.091610014s Jan 30 04:34:04.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:04.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.093244183s Jan 30 04:34:04.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:06.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:06.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:06.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 48.089829917s Jan 30 04:34:06.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:06.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.091410256s Jan 30 04:34:06.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:08.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:08.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:34:08.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 50.08948208s Jan 30 04:34:08.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:08.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.092219672s Jan 30 04:34:08.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:10.484: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-2w7z condition Ready to be true Jan 30 04:34:10.484: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-8989 condition Ready to be true Jan 30 04:34:10.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:10.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:10.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.090039859s Jan 30 04:34:10.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:10.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.093522121s Jan 30 04:34:10.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:12.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:12.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:12.875: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 54.129793846s Jan 30 04:34:12.875: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:12.876: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.130473878s Jan 30 04:34:12.876: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:14.628: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:14.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:14.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 56.090047052s Jan 30 04:34:14.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:14.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.09163064s Jan 30 04:34:14.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:16.675: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:16.675: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:16.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.089643292s Jan 30 04:34:16.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:16.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.091214836s Jan 30 04:34:16.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:18.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:18.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:18.846: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.1006379s Jan 30 04:34:18.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:18.846: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.101149887s Jan 30 04:34:18.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:20.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:20.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:20.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.092450273s Jan 30 04:34:20.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:20.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.092425372s Jan 30 04:34:20.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:22.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:22.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:22.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.089723568s Jan 30 04:34:22.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:22.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.092189372s Jan 30 04:34:22.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:24.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.092240819s Jan 30 04:34:24.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:24.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.093989079s Jan 30 04:34:24.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:24.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:24.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 04:34:26.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.090846887s Jan 30 04:34:26.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:26.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.092630131s Jan 30 04:34:26.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:26.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:26.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:28.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.089627743s Jan 30 04:34:28.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:28.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m10.092270362s Jan 30 04:34:28.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:28.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:28.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:30.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.090361359s Jan 30 04:34:30.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:30.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.092393227s Jan 30 04:34:30.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:31.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:31.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:34:32.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.089898052s Jan 30 04:34:32.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:32.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.091938406s Jan 30 04:34:32.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:33.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:33.051: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:34.840: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.095269721s Jan 30 04:34:34.840: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:34.850: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.105093373s Jan 30 04:34:34.850: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:35.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:35.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:36.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.091307678s Jan 30 04:34:36.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:36.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.092787842s Jan 30 04:34:36.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:37.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:37.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. 
Failure Jan 30 04:34:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.089728206s Jan 30 04:34:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:38.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.091263588s Jan 30 04:34:38.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:39.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:39.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:40.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.090119181s Jan 30 04:34:40.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:40.840: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m22.094554227s Jan 30 04:34:40.840: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:41.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:41.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:42.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.090151374s Jan 30 04:34:42.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:42.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.091785691s Jan 30 04:34:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:43.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:43.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. 
Failure Jan 30 04:34:44.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.092324588s Jan 30 04:34:44.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:44.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.093969811s Jan 30 04:34:44.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:45.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:45.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:46.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.092239833s Jan 30 04:34:46.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:46.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m28.093577844s Jan 30 04:34:46.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:47.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:47.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:48.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.089198587s Jan 30 04:34:48.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:48.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.091765284s Jan 30 04:34:48.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:49.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:49.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:34:50.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.092414712s Jan 30 04:34:50.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:50.845: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.100380849s Jan 30 04:34:50.845: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:51.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:51.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:52.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.089813704s Jan 30 04:34:52.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:52.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m34.091967966s Jan 30 04:34:52.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:53.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:53.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:54.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.089480993s Jan 30 04:34:54.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:54.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.092230785s Jan 30 04:34:54.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:55.583: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:55.583: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:34:56.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.090486097s Jan 30 04:34:56.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:56.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.092422024s Jan 30 04:34:56.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:57.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:57.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:34:58.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.089544712s Jan 30 04:34:58.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:58.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m40.091395218s Jan 30 04:34:58.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:34:59.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:34:59.674: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:00.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.092013579s Jan 30 04:35:00.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:00.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.093658608s Jan 30 04:35:00.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:01.719: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:01.719: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:35:02.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.090221883s Jan 30 04:35:02.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:02.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.09211628s Jan 30 04:35:02.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:03.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:03.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:04.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.089414214s Jan 30 04:35:04.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:04.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m46.091970745s Jan 30 04:35:04.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:05.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:05.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:06.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.089211521s Jan 30 04:35:06.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:06.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.091132599s Jan 30 04:35:06.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:07.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:07.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:35:08.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.089445293s Jan 30 04:35:08.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:08.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.092450427s Jan 30 04:35:08.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:09.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:09.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:10.847: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.101882397s Jan 30 04:35:10.847: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:10.848: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m52.103315284s Jan 30 04:35:10.848: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:11.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:11.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:12.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.089691904s Jan 30 04:35:12.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:12.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.092108432s Jan 30 04:35:12.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:13.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:13.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. 
Failure Jan 30 04:35:14.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.089597992s Jan 30 04:35:14.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:14.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.09177707s Jan 30 04:35:14.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:16.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:16.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:16.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.091357728s Jan 30 04:35:16.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:16.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m58.093352524s Jan 30 04:35:16.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:18.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:18.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:18.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.09095707s Jan 30 04:35:18.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:18.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.094041585s Jan 30 04:35:18.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:20.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:20.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. 
Failure Jan 30 04:35:20.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.092008037s Jan 30 04:35:20.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:20.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.093474355s Jan 30 04:35:20.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:22.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:22.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:22.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.090384234s Jan 30 04:35:22.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:22.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m4.091848327s Jan 30 04:35:22.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:24.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:24.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:24.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.090520498s Jan 30 04:35:24.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:24.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.092309694s Jan 30 04:35:24.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:26.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:26.259: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. 
Failure Jan 30 04:35:26.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.08969131s Jan 30 04:35:26.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:26.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.091887235s Jan 30 04:35:26.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:28.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:28.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:28.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.090253607s Jan 30 04:35:28.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:28.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.092041752s Jan 30 04:35:28.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:30.352: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:30.352: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 04:34:09 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:30.853: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.107809447s Jan 30 04:35:30.853: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:30.853: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.108308222s Jan 30 04:35:30.853: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:32.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:32.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:32.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m14.089550714s Jan 30 04:35:32.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:32.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.091305268s Jan 30 04:35:32.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:34.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:24 +0000 UTC}]. Failure Jan 30 04:35:34.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 04:34:14 +0000 UTC}]. Failure Jan 30 04:35:34.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.089391518s Jan 30 04:35:34.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:34.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.091854055s Jan 30 04:35:34.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hhh7h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-27bcp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:35:36.487: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8989" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:35:36.488: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:35:36.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z": Phase="Running", Reason="", readiness=true. Elapsed: 46.321341ms Jan 30 04:35:36.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" satisfied condition "running and ready, or succeeded" Jan 30 04:35:36.536: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989": Phase="Running", Reason="", readiness=true. Elapsed: 48.695609ms Jan 30 04:35:36.536: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989" satisfied condition "running and ready, or succeeded" Jan 30 04:35:36.536: INFO: Pod "metadata-proxy-v0.1-hhh7h": Phase="Running", Reason="", readiness=true. Elapsed: 48.815838ms Jan 30 04:35:36.536: INFO: Pod "metadata-proxy-v0.1-27bcp": Phase="Running", Reason="", readiness=true. Elapsed: 48.797183ms Jan 30 04:35:36.536: INFO: Pod "metadata-proxy-v0.1-hhh7h" satisfied condition "running and ready, or succeeded" Jan 30 04:35:36.536: INFO: Pod "metadata-proxy-v0.1-27bcp" satisfied condition "running and ready, or succeeded" Jan 30 04:35:36.536: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:35:36.536: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:35:36.536: INFO: Reboot successful on node bootstrap-e2e-minion-group-8989 Jan 30 04:35:36.536: INFO: Reboot successful on node bootstrap-e2e-minion-group-2w7z Jan 30 04:35:36.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m18.089426305s Jan 30 04:35:36.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:36.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.091969545s Jan 30 04:35:36.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.089575253s Jan 30 04:35:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:38.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.092117098s Jan 30 04:35:38.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:40.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m22.092602815s Jan 30 04:35:40.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:40.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.094055907s Jan 30 04:35:40.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:42.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.09198055s Jan 30 04:35:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:42.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.092056885s Jan 30 04:35:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:44.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m26.089349074s Jan 30 04:35:44.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:44.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.091082977s Jan 30 04:35:44.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:46.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.089566103s Jan 30 04:35:46.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:35:46.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.091984731s Jan 30 04:35:46.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:31.544: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m12.798698971s Jan 30 04:36:31.544: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m12.79861015s Jan 30 04:36:31.544: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:31.544: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:32.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.089304838s Jan 30 04:36:32.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:32.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.091636245s Jan 30 04:36:32.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:34.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m16.090069448s Jan 30 04:36:34.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:34.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.091804206s Jan 30 04:36:34.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:36.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.089644453s Jan 30 04:36:36.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:36.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.093054531s Jan 30 04:36:36.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m20.089762987s Jan 30 04:36:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:38.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.091472311s Jan 30 04:36:38.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:40.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.090487393s Jan 30 04:36:40.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:40.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.092387082s Jan 30 04:36:40.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:42.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m24.091572406s Jan 30 04:36:42.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:42.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m24.092832548s Jan 30 04:36:42.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:44.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.090843096s Jan 30 04:36:44.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:44.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.092790369s Jan 30 04:36:44.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:46.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m28.089366243s Jan 30 04:36:46.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:46.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.092015118s Jan 30 04:36:46.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:48.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.089219459s Jan 30 04:36:48.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:48.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.09165781s Jan 30 04:36:48.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:50.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m32.089497321s Jan 30 04:36:50.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:50.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m32.092009972s Jan 30 04:36:50.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:52.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.089338895s Jan 30 04:36:52.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:52.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.092034997s Jan 30 04:36:52.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:54.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m36.091485948s Jan 30 04:36:54.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:54.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m36.093409799s Jan 30 04:36:54.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:56.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.090020617s Jan 30 04:36:56.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:56.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.09159594s Jan 30 04:36:56.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:58.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m40.091278788s Jan 30 04:36:58.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:36:58.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m40.093649333s Jan 30 04:36:58.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:00.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m42.090113573s Jan 30 04:37:00.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:00.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m42.091832087s Jan 30 04:37:00.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:02.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m44.089470317s Jan 30 04:37:02.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:02.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m44.092358039s Jan 30 04:37:02.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:04.840: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.09471137s Jan 30 04:37:04.840: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:04.858: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.113415663s Jan 30 04:37:04.859: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:06.846: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m48.10127074s Jan 30 04:37:06.846: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:06.848: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m48.102647674s Jan 30 04:37:06.848: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:08.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.116926462s Jan 30 04:37:08.862: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:08.869: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.124410431s Jan 30 04:37:08.870: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:10.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m52.091474258s Jan 30 04:37:10.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:10.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m52.093326047s Jan 30 04:37:10.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:12.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m54.090945789s Jan 30 04:37:12.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:12.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m54.092386355s Jan 30 04:37:12.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:14.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m56.090453851s Jan 30 04:37:14.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:14.852: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m56.106909195s Jan 30 04:37:14.852: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:16.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.09136336s Jan 30 04:37:16.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:16.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.092684518s Jan 30 04:37:16.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:18.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m0.090281547s Jan 30 04:37:18.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:18.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m0.091734468s Jan 30 04:37:18.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:20.847: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.102113468s Jan 30 04:37:20.847: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:20.847: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.102109685s Jan 30 04:37:20.847: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:22.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m4.089507434s Jan 30 04:37:22.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:22.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m4.092069883s Jan 30 04:37:22.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:24.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m6.091792969s Jan 30 04:37:24.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:24.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m6.093436027s Jan 30 04:37:24.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:26.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m8.090878668s Jan 30 04:37:26.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:26.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m8.092720938s Jan 30 04:37:26.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:28.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.089660246s Jan 30 04:37:28.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:28.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.091364909s Jan 30 04:37:28.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:30.844: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m12.099440725s Jan 30 04:37:30.844: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m12.099340561s Jan 30 04:37:30.845: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:30.845: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:32.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.090402288s Jan 30 04:37:32.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:32.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.092255691s Jan 30 04:37:32.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:34.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m16.090413455s Jan 30 04:37:34.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:34.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m16.091712219s Jan 30 04:37:34.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:36.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.08930965s Jan 30 04:37:36.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:36.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.09196895s Jan 30 04:37:36.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:38.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m20.089813948s Jan 30 04:37:38.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:38.836: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m20.091322918s Jan 30 04:37:38.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:40.838: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.092624513s Jan 30 04:37:40.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:40.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.094343289s Jan 30 04:37:40.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:42.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m24.092296405s Jan 30 04:37:42.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m24.092397696s Jan 30 04:37:42.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:42.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:44.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.090788427s Jan 30 04:37:44.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:44.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.092344025s Jan 30 04:37:44.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:46.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m28.092122365s Jan 30 04:37:46.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:46.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m28.093378281s Jan 30 04:37:46.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:48.869: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.124084774s Jan 30 04:37:48.869: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:48.869: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.124232953s Jan 30 04:37:48.869: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:50.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m32.092315941s Jan 30 04:37:50.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:50.844: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m32.098691824s Jan 30 04:37:50.844: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:52.857: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.111674264s Jan 30 04:37:52.857: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:52.858: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.113320881s Jan 30 04:37:52.858: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:54.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m36.090030667s Jan 30 04:37:54.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:54.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m36.091967735s Jan 30 04:37:54.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:56.839: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.094260039s Jan 30 04:37:56.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:56.839: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.09423733s Jan 30 04:37:56.839: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:58.834: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m40.08924486s Jan 30 04:37:58.834: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:37:58.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.091800224s Jan 30 04:37:58.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:00.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.091401286s Jan 30 04:38:00.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:00.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.093243794s Jan 30 04:38:00.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:02.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m44.090713431s Jan 30 04:38:02.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:02.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m44.092304422s Jan 30 04:38:02.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:04.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.09006252s Jan 30 04:38:04.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:04.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.091961379s Jan 30 04:38:04.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:06.866: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m48.120754208s Jan 30 04:38:06.866: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:06.867: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m48.122154481s Jan 30 04:38:06.867: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:08.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.089386128s Jan 30 04:38:08.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:08.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.091774055s Jan 30 04:38:08.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:10.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m52.089780438s Jan 30 04:38:10.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:10.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m52.092771012s Jan 30 04:38:10.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:12.837: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.091371225s Jan 30 04:38:12.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:12.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.093001423s Jan 30 04:38:12.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:14.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m56.091164158s Jan 30 04:38:14.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:14.838: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.092879679s Jan 30 04:38:14.838: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:16.836: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.090382334s Jan 30 04:38:16.836: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:16.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.091887752s Jan 30 04:38:16.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards (Spec Runtime: 5m0.295s) test/e2e/cloud/gcp/reboot.go:136 In [It] (Node Runtime: 5m0s) test/e2e/cloud/gcp/reboot.go:136 Spec Goroutine goroutine 9252 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc000f15638?) 
/usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f7f005c5620?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f7f005c5620?, 0xc0054c6680}, {0x8147128?, 0xc0039ce000}, {0xc0044681a0, 0x182}, 0xc004e36f60) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.7({0x7f7f005c5620, 0xc0054c6680}) test/e2e/cloud/gcp/reboot.go:141 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111f08?, 0xc0054c6680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 9241 [chan receive, 5 minutes] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7f7f005c5620?, 0xc0054c6680}, {0x8147128?, 0xc0039ce000}, {0x76d190b, 0xb}, {0xc004daf7c0, 0x4, 0x4}, 0x45d964b800, ...) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...) test/e2e/framework/pod/resource.go:508 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f7f005c5620, 0xc0054c6680}, {0x8147128, 0xc0039ce000}, {0x7fff5c36d5ea, 0x3}, {0xc00314f6c0, 0x1f}, {0xc0044681a0, 0x182}) test/e2e/cloud/gcp/reboot.go:284 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x2) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 30 04:38:18.835: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.090003218s Jan 30 04:38:18.835: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:18.837: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.092108893s Jan 30 04:38:18.837: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:18.877: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=false. 
Elapsed: 5m0.131783255s Jan 30 04:38:18.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-vcng2' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:18.877: INFO: Pod kube-dns-autoscaler-5f6455f985-vcng2 failed to be running and ready, or succeeded. Jan 30 04:38:18.879: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.133483875s Jan 30 04:38:18.879: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-pr8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:22:22 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:23:10 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 04:04:43 +0000 UTC }] Jan 30 04:38:18.879: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 30 04:38:18.879: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s] Jan 30 04:38:18.879: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:04:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:22:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:23:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:04:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP: PodIPs:[] StartTime:2023-01-30 04:04:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-30 04:21:35 +0000 UTC,FinishedAt:2023-01-30 04:22:19 +0000 UTC,ContainerID:containerd://162925a21856075cf49544f632652025c5f137b0f1b380565b1993c1123b20f0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:8 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://162925a21856075cf49544f632652025c5f137b0f1b380565b1993c1123b20f0 Started:0xc003a43057}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 30 04:38:18.944: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: Jan 30 04:38:18.944: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: Jan 30 04:38:18.944: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-vcng2: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:04:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:22:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:23:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 04:04:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP: PodIPs:[] StartTime:2023-01-30 04:04:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-30 04:21:35 +0000 UTC,FinishedAt:2023-01-30 04:22:20 +0000 UTC,ContainerID:containerd://f40f34057e4800a1fc4369d165588cdb4c4762269c1a14a8b2ede3897ee7b792,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:7 
Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://f40f34057e4800a1fc4369d165588cdb4c4762269c1a14a8b2ede3897ee7b792 Started:0xc003a42657}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 30 04:38:18.989: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-vcng2/autoscaler: Jan 30 04:38:18.989: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-vcng2/autoscaler: Jan 30 04:38:18.989: INFO: Node bootstrap-e2e-minion-group-pr8s failed reboot test. Jan 30 04:38:18.989: INFO: Executing termination hook on nodes Jan 30 04:38:18.989: INFO: Getting external IP address for bootstrap-e2e-minion-group-2w7z Jan 30 04:38:18.989: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-2w7z(34.83.14.121:22) Jan 30 04:38:19.508: INFO: ssh prow@34.83.14.121:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 04:38:19.508: INFO: ssh prow@34.83.14.121:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:33:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:38:19.508: INFO: ssh prow@34.83.14.121:22: stderr: "" Jan 30 04:38:19.508: INFO: ssh prow@34.83.14.121:22: exit code: 0 Jan 30 04:38:19.508: INFO: Getting external IP address for bootstrap-e2e-minion-group-8989 Jan 30 04:38:19.508: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-8989(34.145.88.234:22) Jan 30 04:38:20.029: INFO: ssh prow@34.145.88.234:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 04:38:20.029: INFO: ssh prow@34.145.88.234:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:33:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:38:20.029: INFO: ssh prow@34.145.88.234:22: stderr: "" Jan 30 04:38:20.029: INFO: ssh prow@34.145.88.234:22: exit code: 0 Jan 30 04:38:20.029: INFO: Getting external IP address for bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.029: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-pr8s(34.168.173.250:22) Jan 30 04:38:20.567: INFO: ssh prow@34.168.173.250:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 04:38:20.567: INFO: ssh prow@34.168.173.250:22: stdout: "" Jan 30 04:38:20.567: INFO: ssh prow@34.168.173.250:22: stderr: "cat: /tmp/drop-inbound.log: No such file or directory\n" Jan 30 04:38:20.567: INFO: ssh prow@34.168.173.250:22: exit code: 1 Jan 30 04:38:20.567: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-inbound.log: No such file or directory ) [FAILED] Test failed; at least one node failed to reboot in the time given. 
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 04:38:20.567 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 04:38:20.567 (5m2.059s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 04:38:20.567 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 04:38:20.567 Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-9vnqf to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.407024514s (3.407035047s including waiting) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
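The five minutes of "Error evaluating pod condition running and ready, or succeeded" entries above come from the harness repeatedly re-reading each tracked pod until it is either Succeeded, or Running with its Ready condition True. Below is a minimal client-go sketch of that same check, illustrative only: the helper name podReadyOrSucceeded, the 2s poll interval, and the hard-coded pod name are assumptions for this sketch, not the e2e framework's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrSucceeded mirrors the condition named in the log: the pod either
// completed (phase Succeeded) or is Running with the Ready condition True.
func podReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Poll for up to 5 minutes, matching the 5m0s wait that timed out above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-dns-autoscaler-5f6455f985-vcng2", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient API errors
		}
		return podReadyOrSucceeded(pod), nil
	})
	fmt.Println("running and ready, or succeeded:", err == nil)
}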
Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-9vnqf_kube-system(81e628a9-68fb-4bf9-a0f3-07efd15135df) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-9vnqf Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-9vnqf Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-ts65r to bootstrap-e2e-minion-group-8989 Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.048544975s (1.048559529s including waiting) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-ts65r Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-ts65r Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container coredns Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f-ts65r: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-ts65r Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-9vnqf Jan 30 04:38:20.628: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-ts65r Jan 30 04:38:20.628: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 04:38:20.628: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: + exec /usr/local/bin/etcdctl --endpoints=127.0.0.1:2379 --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key --command-timeout=15s endpoint health Jan 30 04:38:20.628: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
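The termination-hook output gathered from the surviving nodes above (the contents of /tmp/drop-inbound.log) traces a script that waits 10 seconds, allows loopback, inserts a blanket INPUT DROP rule, waits 120 seconds, then removes both rules; the "+ true" / "+ break" pairs indicate each iptables call runs inside a retry loop. The sketch below is a rough reconstruction of that sequence shipped to a node over plain ssh: only the sleep/iptables sequence is taken from the trace, while the nohup wrapper, the ssh invocation, and the host string are assumptions of this sketch.

package main

import (
	"fmt"
	"os/exec"
)

// dropInbound reconstructs the shell sequence implied by the xtrace in the
// log, writing its own trace to /tmp/drop-inbound.log as the hook later reads.
const dropInbound = `nohup sh -c '
set -x
sleep 10
while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
while true; do sudo iptables -I INPUT 2 -j DROP && break; done
date
sleep 120
while true; do sudo iptables -D INPUT -j DROP && break; done
while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-inbound.log 2>&1 &`

func main() {
	host := "prow@34.83.14.121" // placeholder: one of the node addresses seen in the log
	out, err := exec.Command("ssh", host, dropInbound).CombinedOutput()
	fmt.Println(string(out), err)
}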
Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 04:38:20.628: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e368a became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a49a became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1499f became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ebdc5 became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_657e6 became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_80e5c became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1624 became leader Jan 30 04:38:20.628: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8907e became leader Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-kfwd4 to bootstrap-e2e-minion-group-8989 Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 626.102447ms (626.126027ms including waiting) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed 
and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-kfwd4_kube-system(dff52a9f-4523-49f5-adce-8d91398aa0ca) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-kfwd4_kube-system(dff52a9f-4523-49f5-adce-8d91398aa0ca) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rzzz6 to bootstrap-e2e-minion-group-2w7z Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 649.515062ms (649.530311ms including waiting) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rzzz6_kube-system(907b1f90-0d41-4e45-be42-cb71fe53653b) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wm5g7 to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 933.59456ms (933.605653ms including waiting) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container konnectivity-agent Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Failed: Error: failed to get sandbox container task: no running task found: task f5fb933e314e02e8c688680c6515433f89f38b11e6128a51e48c4bb125c4e747 not found: not found Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.17:8093/healthz": dial tcp 10.64.3.17:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-wm5g7_kube-system(f4233209-4be0-4cd8-94ce-ced438d88b3f) Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wm5g7 Jan 30 04:38:20.628: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rzzz6 Jan 30 04:38:20.628: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-kfwd4 Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 04:38:20.628: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
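The event lines in this AfterEach come from listing the Event objects in the kube-system namespace. A minimal client-go sketch that produces a similar dump is below; the printed format only approximates the harness's "event for ...: {source} Reason: message" lines.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// List every Event recorded in kube-system and print who it is about,
	// which component reported it, and the reason/message.
	events, err := cs.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}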
Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 04:38:20.628: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 30 04:38:20.628: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:38:20.628: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 30 04:38:20.628: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 04:38:20.628: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(d0b483a2668f277999bcc23ee75fc99e) Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 30 04:38:20.628: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c6c2a8ca-a36c-403f-9999-a2b000b3920e became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f291cda0-4aa2-4a2c-b2d0-0571517f319b became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c833dc2a-e038-4f45-b7ad-08d2638f0b9e became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2b9fcaeb-7945-435c-8c80-25e01cb35133 became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_01f555f6-f30b-41f2-b059-02563b17831c became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: 
{kube-controller-manager } LeaderElection: bootstrap-e2e-master_6c1d9ef4-f4a8-4f03-97e6-e59820e12b3c became leader Jan 30 04:38:20.628: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f84099c9-418f-455f-8646-356ff896dfa0 became leader Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-vcng2 to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.072385291s (3.072406123s including waiting) Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container autoscaler Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container autoscaler Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container autoscaler Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-vcng2_kube-system(5881f6ae-7dab-414e-bcbe-bad1b6578adb) Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-vcng2 Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-vcng2 Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-vcng2 Jan 30 04:38:20.628: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-2w7z_kube-system(de89eacf2d0b5006d7508757b58cec1d) Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-2w7z_kube-system(de89eacf2d0b5006d7508757b58cec1d) Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8989_kube-system(7391456f443d7cab197930929fc65610) Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.628: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-pr8s_kube-system(efb458e63148764d607d005f4ad36f66) Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container kube-proxy Jan 30 04:38:20.629: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-pr8s_kube-system(efb458e63148764d607d005f4ad36f66) Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(f86a03c82069d9e676da0b89466a1071) Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_832b3d7b-7090-4716-933e-249d446b7700 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c8af7ead-b18f-4a96-ac6e-6319fcf78599 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_66792645-ae07-449c-af44-6041568b48bf became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_57583faf-e90c-44f4-a218-579af7af084e became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e1367ebc-037a-4b50-866c-f11a6a850374 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_5766692c-a1b5-4674-a69d-a922221c2db7 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f0787176-244e-4b86-847d-ff46a54637b1 became leader Jan 30 04:38:20.629: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c08b36c4-2380-4d34-932e-7ebb0016ce1b became leader Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-mh466 to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.406902484s (1.406912463s including waiting) Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container default-http-backend Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container default-http-backend Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-mh466 Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-mh466 Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-mh466 Jan 30 04:38:20.629: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://10.138.0.2:8086/healthz": dial tcp 10.138.0.2:8086: connect: connection refused Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-27bcp to bootstrap-e2e-minion-group-8989 Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 799.8822ms (799.892307ms including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.970227248s (1.970246647s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} DNSConfigForming: Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-27bcp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8zhwm to bootstrap-e2e-master Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 1.580995675s (1.581005335s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.012795189s (2.012805077s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hhh7h to bootstrap-e2e-minion-group-2w7z Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 813.017719ms (813.044243ms including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: 
event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.959476196s (1.959487572s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-hhh7h: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-wqvwp to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 777.552996ms (777.571077ms including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.037405647s (2.037416731s including waiting) Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metadata-proxy Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container prometheus-to-sd-exporter Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8zhwm Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hhh7h Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-27bcp Jan 30 04:38:20.629: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-wqvwp Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-dv4lg to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.099427436s (2.099436859s including waiting) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.930381102s (2.930390437s including waiting) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-dv4lg Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-dv4lg Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq to bootstrap-e2e-minion-group-2w7z Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.854731096s (1.854741893s including waiting) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.024346656s (1.024359447s including waiting) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container 
metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-4d2dq_kube-system(255d13f5-f893-4d1d-9807-59a67e85d69e) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server-nanny Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "https://10.64.1.11:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 04:38:20.629: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-pr8s Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.789389581s (3.789407141s including waiting) Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container volume-snapshot-controller Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container volume-snapshot-controller Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container volume-snapshot-controller Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e18f5204-5261-40fa-8f57-029fca0d6f08) Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:38:20.629: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 04:38:20.629 (62ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 04:38:20.629 Jan 30 04:38:20.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 04:38:20.674 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 04:38:20.674 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 04:38:20.674 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 04:38:20.674 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 04:38:20.675 STEP: Collecting events from namespace "reboot-8598". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 04:38:20.675 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 04:38:20.718 Jan 30 04:38:20.760: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 04:38:20.760: INFO: Jan 30 04:38:20.809: INFO: Logging node info for node bootstrap-e2e-master Jan 30 04:38:20.853: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6f4de288-21eb-465e-a25d-71a0f115d23a 4115 0 2023-01-30 04:04:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 04:04:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:04:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 04:36:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858378752 0} {<nil>} 3767948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596234752 0} {<nil>} 3511948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:36:28 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:36:28 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:36:28 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:36:28 +0000 UTC,LastTransitionTime:2023-01-30 04:04:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.49.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2f99bd7dadbd46f22ce4edb25d7437ee,SystemUUID:2f99bd7d-adbd-46f2-2ce4-edb25d7437ee,BootID:e341edb6-7aff-48fb-a607-613234201f7f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:38:20.853: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 04:38:20.907: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 04:38:20.969: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container konnectivity-server-container ready: true, restart count 3 Jan 30 04:38:20.969: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 04:03:41 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container kube-scheduler ready: true, restart count 8 Jan 30 04:38:20.969: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 04:03:59 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container l7-lb-controller ready: true, restart count 9 Jan 30 04:38:20.969: INFO: metadata-proxy-v0.1-8zhwm started at 2023-01-30 04:05:00 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:20.969: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 04:38:20.969: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 04:38:20.969: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container etcd-container ready: true, restart count 3 Jan 30 04:38:20.969: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container kube-apiserver ready: true, restart count 2 Jan 30 04:38:20.969: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container kube-controller-manager ready: true, restart count 9 Jan 30 04:38:20.969: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 04:03:59 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 30 04:38:20.969: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:20.969: INFO: Container etcd-container ready: true, restart count 5 Jan 30 04:38:21.141: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 04:38:21.141: INFO: Logging node info for node bootstrap-e2e-minion-group-2w7z Jan 30 04:38:21.184: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2w7z 0dc9c89e-8b35-476f-a0b5-71d6a867b027 4093 0 2023-01-30 04:04:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2w7z kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 04:34:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:35:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 04:35:31 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{"f:address":{}},"k:{\"type\":\"InternalIP\"}":{"f:address":{}}},"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 04:35:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-2w7z,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.14.121,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2w7z.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2w7z.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a2d81fdfb5c6322fd13b8b18a04da55,SystemUUID:3a2d81fd-fb5c-6322-fd13-b8b18a04da55,BootID:9a5d1d65-a056-45ad-aa38-e844f34fefec,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:38:21.184: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2w7z Jan 30 04:38:21.235: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2w7z Jan 30 04:38:21.300: INFO: konnectivity-agent-rzzz6 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.300: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 30 04:38:21.300: INFO: metrics-server-v0.5.2-867b8754b9-4d2dq started at 2023-01-30 04:05:06 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:21.300: INFO: Container metrics-server ready: false, restart count 4 Jan 30 04:38:21.300: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 30 04:38:21.300: INFO: kube-proxy-bootstrap-e2e-minion-group-2w7z started at 2023-01-30 04:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.300: INFO: Container kube-proxy ready: true, restart count 8 Jan 30 04:38:21.300: INFO: metadata-proxy-v0.1-hhh7h started at 2023-01-30 04:04:30 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:21.300: INFO: Container metadata-proxy ready: true, restart count 3 Jan 30 04:38:21.300: INFO: Container prometheus-to-sd-exporter ready: true, restart count 
3 Jan 30 04:38:21.475: INFO: Latency metrics for node bootstrap-e2e-minion-group-2w7z Jan 30 04:38:21.475: INFO: Logging node info for node bootstrap-e2e-minion-group-8989 Jan 30 04:38:21.518: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8989 4a75ddd1-ef06-47df-ade8-574d74cb42ab 4092 0 2023-01-30 04:04:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8989 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 04:34:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:35:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 04:35:31 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 04:35:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-8989,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 +0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:35:09 
+0000 UTC,LastTransitionTime:2023-01-30 04:29:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:35:31 +0000 UTC,LastTransitionTime:2023-01-30 04:35:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.88.234,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8989.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8989.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd7b470886f95ec884f70a2ac96a6ad7,SystemUUID:dd7b4708-86f9-5ec8-84f7-0a2ac96a6ad7,BootID:acf91d68-25f3-43f8-801b-71f8229c37d6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:38:21.518: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8989 Jan 30 04:38:21.565: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8989 Jan 30 04:38:21.629: INFO: konnectivity-agent-kfwd4 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.629: 
INFO: Container konnectivity-agent ready: true, restart count 11 Jan 30 04:38:21.629: INFO: coredns-6846b5b5f-ts65r started at 2023-01-30 04:04:52 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.629: INFO: Container coredns ready: false, restart count 6 Jan 30 04:38:21.629: INFO: kube-proxy-bootstrap-e2e-minion-group-8989 started at 2023-01-30 04:04:33 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.629: INFO: Container kube-proxy ready: true, restart count 6 Jan 30 04:38:21.629: INFO: metadata-proxy-v0.1-27bcp started at 2023-01-30 04:04:34 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:21.629: INFO: Container metadata-proxy ready: true, restart count 3 Jan 30 04:38:21.629: INFO: Container prometheus-to-sd-exporter ready: true, restart count 3 Jan 30 04:38:21.794: INFO: Latency metrics for node bootstrap-e2e-minion-group-8989 Jan 30 04:38:21.794: INFO: Logging node info for node bootstrap-e2e-minion-group-pr8s Jan 30 04:38:21.837: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-pr8s 3cd1e5e1-5c3f-4d16-a492-09b76a02380e 4311 0 2023-01-30 04:04:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-pr8s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 04:22:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:23:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 04:33:21 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status} {node-problem-detector Update v1 2023-01-30 04:38:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-pr8s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:38:11 +0000 UTC,LastTransitionTime:2023-01-30 04:23:08 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:33:21 +0000 UTC,LastTransitionTime:2023-01-30 04:23:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:33:21 +0000 UTC,LastTransitionTime:2023-01-30 04:23:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:33:21 +0000 UTC,LastTransitionTime:2023-01-30 04:23:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:33:21 +0000 UTC,LastTransitionTime:2023-01-30 04:23:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.173.250,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-pr8s.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-pr8s.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9bbcb548c0d5032d8123f6780ca06f95,SystemUUID:9bbcb548-c0d5-032d-8123-f6780ca06f95,BootID:20621036-c2c2-44b6-993d-20c8d5436e83,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:38:21.838: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-pr8s Jan 30 04:38:21.885: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-pr8s Jan 30 04:38:21.938: INFO: konnectivity-agent-wm5g7 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 30 04:38:21.938: INFO: kube-proxy-bootstrap-e2e-minion-group-pr8s started at 2023-01-30 04:04:33 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container kube-proxy ready: true, restart count 11 Jan 30 04:38:21.938: INFO: l7-default-backend-8549d69d99-mh466 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container default-http-backend ready: false, restart count 2 Jan 30 04:38:21.938: INFO: kube-dns-autoscaler-5f6455f985-vcng2 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container autoscaler ready: false, restart count 7 Jan 30 04:38:21.938: INFO: volume-snapshot-controller-0 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container volume-snapshot-controller ready: false, restart count 8 Jan 30 04:38:21.938: INFO: coredns-6846b5b5f-9vnqf started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:38:21.938: INFO: Container coredns ready: false, restart count 6 Jan 30 04:38:21.938: INFO: metadata-proxy-v0.1-wqvwp started at 2023-01-30 04:04:34 +0000 UTC (0+2 container statuses recorded) Jan 30 04:38:21.938: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 04:38:21.938: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 04:38:22.103: INFO: Latency metrics for node bootstrap-e2e-minion-group-pr8s END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 04:38:22.103 (1.428s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 04:38:22.103 (1.429s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 04:38:22.103 STEP: Destroying namespace "reboot-8598" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/30/23 04:38:22.103 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 04:38:22.148 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 04:38:22.149 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 04:38:22.149 (0s)
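The node-info dumps above come from the failure handler and include each node's full Conditions list (Ready, MemoryPressure, DiskPressure, PIDPressure, plus the node-problem-detector conditions). To look at the same Ready condition by hand against a live cluster, a hypothetical manual check (not part of the e2e suite) would be:

    kubectl get node bootstrap-e2e-minion-group-2w7z \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints True, False, or Unknown, depending on whether the kubelet is still posting ready status.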
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 04:08:36.556 from ginkgo_report.xml
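As the log below shows, this test pushes the same script over SSH to each node; in the SSH log entries it appears as a single string with escaped newlines and tabs. Expanded for readability, the payload is:

    nohup sh -c '
        set -x
        sleep 10
        # keep loopback traffic working, then drop all other outbound packets
        while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
        date
        sleep 120
        # remove both rules so the node can recover
        while true; do sudo iptables -D OUTPUT -j DROP && break; done
        while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-outbound.log 2>&1 &

After starting this on each node, the test waits up to 2m0s for each node's Ready condition to become false, then checks that the nodes recover and function afterwards.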
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 04:06:17.967 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 04:06:17.967 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 04:06:17.967 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 04:06:17.967 Jan 30 04:06:17.967: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 04:06:17.968 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 04:06:18.098 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 04:06:18.182 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 04:06:18.265 (298ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 04:06:18.265 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 04:06:18.265 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/30/23 04:06:18.265 Jan 30 04:06:18.431: INFO: Getting bootstrap-e2e-minion-group-8989 Jan 30 04:06:18.431: INFO: Getting bootstrap-e2e-minion-group-pr8s Jan 30 04:06:18.431: INFO: Getting bootstrap-e2e-minion-group-2w7z Jan 30 04:06:18.478: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-2w7z condition Ready to be true Jan 30 04:06:18.478: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-pr8s condition Ready to be true Jan 30 04:06:18.481: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8989 condition Ready to be true Jan 30 04:06:18.529: INFO: Node bootstrap-e2e-minion-group-pr8s has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0] Jan 30 04:06:18.529: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0] Jan 30 04:06:18.529: INFO: Node bootstrap-e2e-minion-group-8989 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:06:18.529: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:06:18.529: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.529: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-27bcp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Node bootstrap-e2e-minion-group-2w7z has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hhh7h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8989" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-vcng2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-pr8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-wqvwp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.590: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 60.799268ms Jan 30 04:06:18.590: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.592: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=true. Elapsed: 61.831175ms Jan 30 04:06:18.592: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-hhh7h": Phase="Running", Reason="", readiness=true. Elapsed: 63.543094ms Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-hhh7h" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z": Phase="Running", Reason="", readiness=true. Elapsed: 63.535598ms Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:06:18.593: INFO: Getting external IP address for bootstrap-e2e-minion-group-2w7z Jan 30 04:06:18.593: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-2w7z(34.83.14.121:22) Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989": Phase="Running", Reason="", readiness=true. Elapsed: 63.587113ms Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-wqvwp": Phase="Running", Reason="", readiness=true. Elapsed: 63.594916ms Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-wqvwp" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-27bcp": Phase="Running", Reason="", readiness=true. 
Elapsed: 63.9296ms Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-27bcp" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:06:18.593: INFO: Getting external IP address for bootstrap-e2e-minion-group-8989 Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-pr8s": Phase="Running", Reason="", readiness=true. Elapsed: 63.755288ms Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-pr8s" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.594: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0] Jan 30 04:06:18.594: INFO: Getting external IP address for bootstrap-e2e-minion-group-pr8s Jan 30 04:06:18.593: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-8989(34.145.88.234:22) Jan 30 04:06:18.594: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-pr8s(34.168.173.250:22) Jan 30 04:06:19.117: INFO: ssh prow@34.83.14.121:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 30 04:06:19.117: INFO: ssh prow@34.83.14.121:22: stdout: "" Jan 30 04:06:19.117: INFO: ssh prow@34.83.14.121:22: stderr: "" Jan 30 04:06:19.117: INFO: ssh prow@34.83.14.121:22: exit code: 0 Jan 30 04:06:19.117: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-2w7z condition Ready to be false Jan 30 04:06:19.117: INFO: ssh prow@34.145.88.234:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 30 04:06:19.117: INFO: ssh prow@34.145.88.234:22: stdout: "" Jan 30 04:06:19.117: INFO: ssh prow@34.145.88.234:22: stderr: "" Jan 30 04:06:19.117: INFO: ssh prow@34.145.88.234:22: exit code: 0 Jan 30 04:06:19.117: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8989 condition Ready to be false Jan 30 04:06:19.121: INFO: ssh 
prow@34.168.173.250:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 30 04:06:19.121: INFO: ssh prow@34.168.173.250:22: stdout: "" Jan 30 04:06:19.121: INFO: ssh prow@34.168.173.250:22: stderr: "" Jan 30 04:06:19.121: INFO: ssh prow@34.168.173.250:22: exit code: 0 Jan 30 04:06:19.121: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-pr8s condition Ready to be false Jan 30 04:06:19.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:19.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:19.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:21.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:21.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:21.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:23.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:23.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:23.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:25.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:25.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:25.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:27.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:27.439: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:27.439: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 04:06:29.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:29.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:29.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:31.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:31.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:31.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:33.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:33.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:33.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:35.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:35.626: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:35.626: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:37.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:37.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:37.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:39.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:39.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:39.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:41.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:41.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:41.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:43.784: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:43.802: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:43.802: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:45.824: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:45.842: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:45.842: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:47.865: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:47.882: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:47.882: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:49.906: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:49.923: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:49.923: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:51.948: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:51.963: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:51.963: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:53.987: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:54.002: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:54.003: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:56.028: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:56.042: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:56.043: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:58.069: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:58.083: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:58.084: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:00.110: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:00.123: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:00.123: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:02.150: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:02.164: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:02.164: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:04.190: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:04.204: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:04.204: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:06.230: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:06.244: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:06.244: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:08.271: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:08.285: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:08.285: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:10.312: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 
04:07:10.325: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:10.325: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:12.353: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:12.365: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:12.365: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:20.059: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:20.059: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:20.060: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:22.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:22.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:22.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:24.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:24.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:24.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:26.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:26.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:26.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:28.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:28.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:28.365: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:30.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:30.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:30.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:32.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:32.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:32.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:34.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:34.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:34.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:36.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:36.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:36.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:38.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:38.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:38.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:40.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:40.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:40.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:42.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:42.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 04:07:42.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:44.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:44.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:44.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:46.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:46.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:46.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:48.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:48.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:48.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:50.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:50.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:50.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:52.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:52.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:52.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:54.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:54.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:54.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:57.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:57.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:57.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:59.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:59.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:59.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:01.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:01.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:01.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:03.144: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:03.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:03.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:05.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:05.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:05.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:07.233: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:07.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:07.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 04:08:09.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:09.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:09.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:11.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:11.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:11.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:13.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:13.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:13.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:15.409: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:15.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:15.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:17.454: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:17.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:17.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:19.455: INFO: Node bootstrap-e2e-minion-group-8989 didn't reach desired Ready condition status (false) within 2m0s Jan 30 04:08:19.535: INFO: Node bootstrap-e2e-minion-group-pr8s didn't reach desired Ready condition status (false) within 2m0s Jan 30 04:08:19.536: INFO: Node bootstrap-e2e-minion-group-2w7z didn't reach desired Ready condition status (false) within 2m0s Jan 30 04:08:19.536: INFO: Node bootstrap-e2e-minion-group-2w7z failed reboot test. Jan 30 04:08:19.536: INFO: Node bootstrap-e2e-minion-group-8989 failed reboot test. Jan 30 04:08:19.536: INFO: Node bootstrap-e2e-minion-group-pr8s failed reboot test. 
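The polling above is the crux of the failure: after issuing the packet drop, the test expects each node's Ready condition to flip to False (the kubelet should stop posting status), and only then waits for the node to come back. Here all three nodes stayed Ready for the entire 2m0s window, so each one is marked as having failed the reboot test. Purely as a sketch of that wait, and not the framework's actual implementation (the 2-second poll interval, helper name, and client setup are assumptions), the loop that timed out looks conceptually like this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReadyStatus polls one node until its Ready condition reports the
// desired status, or the timeout expires. During the packet drop the test
// wants status False; in the log above the condition stayed True for the
// whole 2m0s, so a wait like this returns a timeout error.
func waitForReadyStatus(c kubernetes.Interface, name string, want v1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient API errors ("Couldn't get node ...") are logged and retried.
			fmt.Printf("Couldn't get node %s\n", name)
			return false, nil
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type != v1.NodeReady {
				continue
			}
			if cond.Status == want {
				return true, nil
			}
			fmt.Printf("Condition Ready of node %s is %s instead of %s. Reason: %s, message: %s\n",
				name, cond.Status, want, cond.Reason, cond.Message)
			return false, nil
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Expect the node to drop out of Ready within 2 minutes of the iptables drop.
	err = waitForReadyStatus(client, "bootstrap-e2e-minion-group-2w7z", v1.ConditionFalse, 2*time.Minute)
	if err != nil {
		fmt.Println("Node didn't reach desired Ready condition status (false) within 2m0s")
	}
}
```

The scattered "Couldn't get node ..." lines earlier in the log correspond to polls where the Get itself failed and the loop simply retried; the termination-hook output that follows shows the iptables commands the test had issued on each node.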
Jan 30 04:08:19.536: INFO: Executing termination hook on nodes Jan 30 04:08:19.536: INFO: Getting external IP address for bootstrap-e2e-minion-group-2w7z Jan 30 04:08:19.536: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-2w7z(34.83.14.121:22) Jan 30 04:08:35.493: INFO: ssh prow@34.83.14.121:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 30 04:08:35.493: INFO: ssh prow@34.83.14.121:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:06:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:08:35.493: INFO: ssh prow@34.83.14.121:22: stderr: "" Jan 30 04:08:35.493: INFO: ssh prow@34.83.14.121:22: exit code: 0 Jan 30 04:08:35.493: INFO: Getting external IP address for bootstrap-e2e-minion-group-8989 Jan 30 04:08:35.493: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-8989(34.145.88.234:22) Jan 30 04:08:36.022: INFO: ssh prow@34.145.88.234:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 30 04:08:36.022: INFO: ssh prow@34.145.88.234:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:06:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:08:36.022: INFO: ssh prow@34.145.88.234:22: stderr: "" Jan 30 04:08:36.022: INFO: ssh prow@34.145.88.234:22: exit code: 0 Jan 30 04:08:36.022: INFO: Getting external IP address for bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.022: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-pr8s(34.168.173.250:22) Jan 30 04:08:36.555: INFO: ssh prow@34.168.173.250:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 30 04:08:36.555: INFO: ssh prow@34.168.173.250:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:06:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:08:36.555: INFO: ssh prow@34.168.173.250:22: stderr: "" Jan 30 04:08:36.555: INFO: ssh prow@34.168.173.250:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 04:08:36.556 < Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/30/23 04:08:36.556 (2m18.291s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 04:08:36.556 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 04:08:36.556 Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. 
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-9vnqf to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.407024514s (3.407035047s including waiting) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-9vnqf_kube-system(81e628a9-68fb-4bf9-a0f3-07efd15135df) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-ts65r to bootstrap-e2e-minion-group-8989 Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.048544975s (1.048559529s including waiting) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet 
bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-9vnqf Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-ts65r Jan 30 04:08:36.606: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 04:08:36.606: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 30 04:08:36.606: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e368a became leader Jan 30 04:08:36.606: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a49a became leader Jan 30 04:08:36.606: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1499f became leader Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-kfwd4 to bootstrap-e2e-minion-group-8989 Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 626.102447ms (626.126027ms including waiting) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rzzz6 to bootstrap-e2e-minion-group-2w7z Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 649.515062ms (649.530311ms including waiting) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container konnectivity-agent Jan 30 04:08:36.606: INFO: event for 
konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wm5g7 to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 933.59456ms (933.605653ms including waiting) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Failed: Error: failed to get sandbox container task: no running task found: task f5fb933e314e02e8c688680c6515433f89f38b11e6128a51e48c4bb125c4e747 not found: not found Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.17:8093/healthz": dial tcp 10.64.3.17:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wm5g7 Jan 30 04:08:36.606: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rzzz6 Jan 30 04:08:36.606: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-kfwd4 Jan 30 04:08:36.606: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 30 04:08:36.606: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 04:08:36.606: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 04:08:36.606: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 04:08:36.606: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 04:08:36.606: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(d0b483a2668f277999bcc23ee75fc99e) Jan 30 04:08:36.606: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c6c2a8ca-a36c-403f-9999-a2b000b3920e became leader Jan 30 04:08:36.606: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f291cda0-4aa2-4a2c-b2d0-0571517f319b became leader Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-vcng2 to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.072385291s (3.072406123s including waiting) Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container autoscaler Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container autoscaler Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container autoscaler Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-vcng2_kube-system(5881f6ae-7dab-414e-bcbe-bad1b6578adb) Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-vcng2 Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-2w7z_kube-system(de89eacf2d0b5006d7508757b58cec1d) Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8989_kube-system(7391456f443d7cab197930929fc65610) Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
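The "event for ..." lines above and below are the suite's AfterEach dumping every event in the "kube-system" namespace, one line per event, in the shape "event for <object>: {<source component> <host>} <reason>: <message>". Purely as an illustration, not the framework's own dump code (the kubeconfig path is taken from this run and the rest is assumed), a minimal client-go version of such a dump:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// dumpEvents lists every event in a namespace and prints one line per event,
// roughly in the "event for <object>: {<component> <host>} <reason>: <message>"
// form seen in this log.
func dumpEvents(c kubernetes.Interface, namespace string) error {
	events, err := c.CoreV1().Events(namespace).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	if err := dumpEvents(kubernetes.NewForConfigOrDie(cfg), "kube-system"); err != nil {
		panic(err)
	}
}
```

The BackOff, Unhealthy, and SandboxChanged events in this dump are diagnostic context consistent with containers losing connectivity during the 120s outbound drop; the test failure itself is the Ready-condition timeout reported above.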
Jan 30 04:08:36.606: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 04:08:36.606: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 04:08:36.606: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_832b3d7b-7090-4716-933e-249d446b7700 became leader Jan 30 04:08:36.606: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c8af7ead-b18f-4a96-ac6e-6319fcf78599 became leader Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-mh466 to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.406902484s (1.406912463s including waiting) Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container default-http-backend Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container default-http-backend Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-mh466 Jan 30 04:08:36.606: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 04:08:36.606: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 04:08:36.606: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet 
bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 04:08:36.606: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 04:08:36.606: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 04:08:36.606: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 30 04:08:36.606: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-27bcp to bootstrap-e2e-minion-group-8989 Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 799.8822ms (799.892307ms including waiting) Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container metadata-proxy Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container metadata-proxy Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.970227248s (1.970246647s including waiting) Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container prometheus-to-sd-exporter Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-27bcp: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container prometheus-to-sd-exporter Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8zhwm to bootstrap-e2e-master Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 1.580995675s (1.581005335s including waiting) Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet 
bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.012795189s (2.012805077s including waiting) Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-8zhwm: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hhh7h to bootstrap-e2e-minion-group-2w7z Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 813.017719ms (813.044243ms including waiting) Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metadata-proxy Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metadata-proxy Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.959476196s (1.959487572s including waiting) Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container prometheus-to-sd-exporter Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-hhh7h: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container prometheus-to-sd-exporter Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-wqvwp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-wqvwp to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 777.552996ms (777.571077ms including waiting) Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metadata-proxy Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metadata-proxy Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.037405647s (2.037416731s including waiting) Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container prometheus-to-sd-exporter Jan 30 04:08:36.606: INFO: event for 
metadata-proxy-v0.1-wqvwp: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container prometheus-to-sd-exporter Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8zhwm Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hhh7h Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-27bcp Jan 30 04:08:36.606: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-wqvwp Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-dv4lg to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.099427436s (2.099436859s including waiting) Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metrics-server Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metrics-server Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.930381102s (2.930390437s including waiting) Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container metrics-server-nanny Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container metrics-server-nanny Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container metrics-server Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container metrics-server-nanny Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed 
and re-created. Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c-dv4lg: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-dv4lg Jan 30 04:08:36.606: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-dv4lg Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-4d2dq to bootstrap-e2e-minion-group-2w7z Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.854731096s (1.854741893s including waiting) Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.024346656s (1.024359447s including waiting) Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container metrics-server-nanny Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container metrics-server-nanny Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container metrics-server Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping 
container metrics-server-nanny Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9-4d2dq: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-4d2dq Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 04:08:36.607: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.789389581s (3.789407141s including waiting) Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container volume-snapshot-controller Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container volume-snapshot-controller Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container volume-snapshot-controller Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e18f5204-5261-40fa-8f57-029fca0d6f08) Jan 30 04:08:36.607: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 04:08:36.607 (51ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 04:08:36.607 Jan 30 04:08:36.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 04:08:36.656 (49ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 04:08:36.656 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 04:08:36.656 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 04:08:36.656 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 04:08:36.656 STEP: Collecting events from namespace "reboot-639". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 04:08:36.656 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 04:08:36.697 Jan 30 04:08:36.739: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 04:08:36.739: INFO: Jan 30 04:08:36.789: INFO: Logging node info for node bootstrap-e2e-master Jan 30 04:08:36.834: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 6f4de288-21eb-465e-a25d-71a0f115d23a 733 0 2023-01-30 04:04:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 04:04:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:04:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 04:05:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858378752 0} {<nil>} 3767948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596234752 0} {<nil>} 3511948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:18 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:18 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:18 +0000 UTC,LastTransitionTime:2023-01-30 04:04:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:05:18 +0000 UTC,LastTransitionTime:2023-01-30 04:04:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.49.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2f99bd7dadbd46f22ce4edb25d7437ee,SystemUUID:2f99bd7d-adbd-46f2-2ce4-edb25d7437ee,BootID:e341edb6-7aff-48fb-a607-613234201f7f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:08:36.835: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 04:08:36.880: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 04:08:36.955: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:36.955: INFO: Container etcd-container ready: true, restart count 0 Jan 30 04:08:36.955: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:36.955: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 30 04:08:36.955: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 04:03:41 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:36.955: INFO: Container kube-scheduler ready: true, restart count 1 Jan 30 04:08:36.955: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 04:03:59 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:36.955: INFO: Container l7-lb-controller ready: true, restart count 4 Jan 30 04:08:36.955: INFO: metadata-proxy-v0.1-8zhwm started at 2023-01-30 04:05:00 +0000 UTC (0+2 container statuses recorded) Jan 30 04:08:36.955: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 04:08:36.955: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 04:08:36.955: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:36.955: INFO: Container etcd-container ready: true, restart count 2 Jan 30 04:08:36.955: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:36.955: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 04:08:36.955: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 04:03:40 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:36.955: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 30 04:08:36.955: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 04:03:59 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:36.955: INFO: Container kube-addon-manager ready: true, restart count 0 Jan 30 04:08:37.199: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 04:08:37.199: INFO: Logging node info for node bootstrap-e2e-minion-group-2w7z Jan 30 04:08:37.242: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2w7z 0dc9c89e-8b35-476f-a0b5-71d6a867b027 1095 0 2023-01-30 04:04:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2w7z kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-30 04:04:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 04:04:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-30 04:04:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:04:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 04:08:29 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{"f:address":{}},"k:{\"type\":\"InternalIP\"}":{"f:address":{}}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-2w7z,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:04:33 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:04:33 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:04:33 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:33 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:33 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:33 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:04:33 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:08:29 +0000 UTC,LastTransitionTime:2023-01-30 04:04:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:08:29 +0000 UTC,LastTransitionTime:2023-01-30 04:04:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:08:29 +0000 UTC,LastTransitionTime:2023-01-30 04:04:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:08:29 +0000 UTC,LastTransitionTime:2023-01-30 04:04:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:,},NodeAddress{Type:InternalDNS,Address:,},NodeAddress{Type:Hostname,Address:,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3a2d81fdfb5c6322fd13b8b18a04da55,SystemUUID:3a2d81fd-fb5c-6322-fd13-b8b18a04da55,BootID:214aadca-6e16-4b48-bb6d-5b0b31f8bfcf,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:08:37.243: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2w7z Jan 30 04:08:37.287: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2w7z Jan 30 04:08:37.331: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-2w7z: lookup : no such host Jan 30 04:08:37.331: INFO: Logging node info for node bootstrap-e2e-minion-group-8989 Jan 30 04:08:37.375: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8989 4a75ddd1-ef06-47df-ade8-574d74cb42ab 670 0 2023-01-30 04:04:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8989 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 04:04:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 04:04:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 04:04:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:04:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 04:05:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-8989,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} 
{<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:03 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:03 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:03 +0000 UTC,LastTransitionTime:2023-01-30 04:04:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:05:03 +0000 UTC,LastTransitionTime:2023-01-30 04:04:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.88.234,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8989.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8989.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd7b470886f95ec884f70a2ac96a6ad7,SystemUUID:dd7b4708-86f9-5ec8-84f7-0a2ac96a6ad7,BootID:d1a572fe-da4f-4505-9c50-e99dc674472a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:08:37.375: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8989 Jan 30 04:08:37.419: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8989 Jan 30 04:08:37.484: INFO: kube-proxy-bootstrap-e2e-minion-group-8989 started at 2023-01-30 04:04:33 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.484: INFO: Container kube-proxy ready: true, restart count 3 Jan 30 04:08:37.484: INFO: metadata-proxy-v0.1-27bcp started at 2023-01-30 04:04:34 +0000 UTC (0+2 container statuses recorded) Jan 30 04:08:37.484: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 04:08:37.484: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 04:08:37.484: INFO: konnectivity-agent-kfwd4 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.484: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 30 04:08:37.484: INFO: coredns-6846b5b5f-ts65r started at 2023-01-30 04:04:52 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.484: INFO: Container coredns ready: true, restart count 2 Jan 30 04:08:37.663: INFO: Latency metrics for node bootstrap-e2e-minion-group-8989 Jan 30 04:08:37.663: INFO: Logging node info for node bootstrap-e2e-minion-group-pr8s Jan 30 04:08:37.707: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-pr8s 3cd1e5e1-5c3f-4d16-a492-09b76a02380e 679 0 2023-01-30 04:04:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-pr8s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-30 04:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 04:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-30 04:04:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 04:04:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 04:05:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-gce-ci/us-west1-b/bootstrap-e2e-minion-group-pr8s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} 
{<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 04:04:37 +0000 UTC,LastTransitionTime:2023-01-30 04:04:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 04:04:43 +0000 UTC,LastTransitionTime:2023-01-30 04:04:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:04 +0000 UTC,LastTransitionTime:2023-01-30 04:04:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:04 +0000 UTC,LastTransitionTime:2023-01-30 04:04:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 04:05:04 +0000 UTC,LastTransitionTime:2023-01-30 04:04:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 04:05:04 +0000 UTC,LastTransitionTime:2023-01-30 04:04:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.173.250,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-pr8s.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-pr8s.c.k8s-jkns-e2e-kubeadm-gce-ci.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9bbcb548c0d5032d8123f6780ca06f95,SystemUUID:9bbcb548-c0d5-032d-8123-f6780ca06f95,BootID:50ecc676-c132-45d2-a77b-df7cbabfe015,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-11-g9857b5d1b,KubeletVersion:v1.27.0-alpha.1.80+97636ed7810137,KubeProxyVersion:v1.27.0-alpha.1.80+97636ed7810137,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 04:08:37.707: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-pr8s Jan 30 04:08:37.752: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-pr8s Jan 30 04:08:37.817: INFO: l7-default-backend-8549d69d99-mh466 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.817: INFO: Container default-http-backend ready: true, 
restart count 1 Jan 30 04:08:37.817: INFO: coredns-6846b5b5f-9vnqf started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.817: INFO: Container coredns ready: true, restart count 3 Jan 30 04:08:37.817: INFO: konnectivity-agent-wm5g7 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.817: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 30 04:08:37.817: INFO: kube-proxy-bootstrap-e2e-minion-group-pr8s started at 2023-01-30 04:04:33 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.817: INFO: Container kube-proxy ready: true, restart count 1 Jan 30 04:08:37.817: INFO: metadata-proxy-v0.1-wqvwp started at 2023-01-30 04:04:34 +0000 UTC (0+2 container statuses recorded) Jan 30 04:08:37.817: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 04:08:37.817: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 04:08:37.817: INFO: kube-dns-autoscaler-5f6455f985-vcng2 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.817: INFO: Container autoscaler ready: false, restart count 2 Jan 30 04:08:37.817: INFO: volume-snapshot-controller-0 started at 2023-01-30 04:04:43 +0000 UTC (0+1 container statuses recorded) Jan 30 04:08:37.817: INFO: Container volume-snapshot-controller ready: true, restart count 4 Jan 30 04:08:37.998: INFO: Latency metrics for node bootstrap-e2e-minion-group-pr8s END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 04:08:37.998 (1.342s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 04:08:37.998 (1.342s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 04:08:37.998 STEP: Destroying namespace "reboot-639" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 04:08:37.998 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 04:08:38.042 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 04:08:38.042 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 04:08:38.042 (0s)
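For manual triage of a failure like the one above, the same node and pod state can be pulled from the live cluster; a minimal sketch, assuming kubectl is pointed at this cluster and using the node and pod names from the dump above:

    # Node readiness at a glance, including the node that never came back
    kubectl get nodes -o wide
    kubectl describe node bootstrap-e2e-minion-group-2w7z

    # Kubelet-level events for the metrics-server pod whose probes were failing
    kubectl get events -n kube-system \
      --field-selector involvedObject.name=metrics-server-v0.5.2-867b8754b9-4d2dq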
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 04:08:36.556
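The payload the test pushes over SSH to each node is logged below as an escaped string; unescaped it is the following script (copied from the log, not a reimplementation), which first allows loopback, drops all other outbound packets for 120 seconds, then removes both rules:

    nohup sh -c '
        set -x
        sleep 10
        # keep loopback traffic working while everything else is blocked
        while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
        date
        sleep 120
        # restore normal outbound traffic
        while true; do sudo iptables -D OUTPUT -j DROP && break; done
        while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-outbound.log 2>&1 &

The loopback ACCEPT rule is inserted first, presumably so local traffic on the node keeps working while external traffic is dropped; the node is expected to go NotReady during the 120-second window and recover once the rules are deleted.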
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 04:06:17.967 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 04:06:17.967 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 04:06:17.967 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 04:06:17.967 Jan 30 04:06:17.967: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 04:06:17.968 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 04:06:18.098 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 04:06:18.182 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 04:06:18.265 (298ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 04:06:18.265 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 04:06:18.265 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/30/23 04:06:18.265 Jan 30 04:06:18.431: INFO: Getting bootstrap-e2e-minion-group-8989 Jan 30 04:06:18.431: INFO: Getting bootstrap-e2e-minion-group-pr8s Jan 30 04:06:18.431: INFO: Getting bootstrap-e2e-minion-group-2w7z Jan 30 04:06:18.478: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-2w7z condition Ready to be true Jan 30 04:06:18.478: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-pr8s condition Ready to be true Jan 30 04:06:18.481: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8989 condition Ready to be true Jan 30 04:06:18.529: INFO: Node bootstrap-e2e-minion-group-pr8s has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0] Jan 30 04:06:18.529: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0] Jan 30 04:06:18.529: INFO: Node bootstrap-e2e-minion-group-8989 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:06:18.529: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:06:18.529: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.529: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-27bcp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Node bootstrap-e2e-minion-group-2w7z has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hhh7h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8989" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-vcng2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-pr8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.530: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-wqvwp" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 04:06:18.590: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 60.799268ms Jan 30 04:06:18.590: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.592: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2": Phase="Running", Reason="", readiness=true. Elapsed: 61.831175ms Jan 30 04:06:18.592: INFO: Pod "kube-dns-autoscaler-5f6455f985-vcng2" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-hhh7h": Phase="Running", Reason="", readiness=true. Elapsed: 63.543094ms Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-hhh7h" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z": Phase="Running", Reason="", readiness=true. Elapsed: 63.535598ms Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-2w7z" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-2w7z metadata-proxy-v0.1-hhh7h] Jan 30 04:06:18.593: INFO: Getting external IP address for bootstrap-e2e-minion-group-2w7z Jan 30 04:06:18.593: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-2w7z(34.83.14.121:22) Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989": Phase="Running", Reason="", readiness=true. Elapsed: 63.587113ms Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8989" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-wqvwp": Phase="Running", Reason="", readiness=true. Elapsed: 63.594916ms Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-wqvwp" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-27bcp": Phase="Running", Reason="", readiness=true. 
Elapsed: 63.9296ms Jan 30 04:06:18.593: INFO: Pod "metadata-proxy-v0.1-27bcp" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.593: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8989 metadata-proxy-v0.1-27bcp] Jan 30 04:06:18.593: INFO: Getting external IP address for bootstrap-e2e-minion-group-8989 Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-pr8s": Phase="Running", Reason="", readiness=true. Elapsed: 63.755288ms Jan 30 04:06:18.593: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-pr8s" satisfied condition "running and ready, or succeeded" Jan 30 04:06:18.594: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-vcng2 kube-proxy-bootstrap-e2e-minion-group-pr8s metadata-proxy-v0.1-wqvwp volume-snapshot-controller-0] Jan 30 04:06:18.594: INFO: Getting external IP address for bootstrap-e2e-minion-group-pr8s Jan 30 04:06:18.593: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-8989(34.145.88.234:22) Jan 30 04:06:18.594: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-pr8s(34.168.173.250:22) Jan 30 04:06:19.117: INFO: ssh prow@34.83.14.121:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 30 04:06:19.117: INFO: ssh prow@34.83.14.121:22: stdout: "" Jan 30 04:06:19.117: INFO: ssh prow@34.83.14.121:22: stderr: "" Jan 30 04:06:19.117: INFO: ssh prow@34.83.14.121:22: exit code: 0 Jan 30 04:06:19.117: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-2w7z condition Ready to be false Jan 30 04:06:19.117: INFO: ssh prow@34.145.88.234:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 30 04:06:19.117: INFO: ssh prow@34.145.88.234:22: stdout: "" Jan 30 04:06:19.117: INFO: ssh prow@34.145.88.234:22: stderr: "" Jan 30 04:06:19.117: INFO: ssh prow@34.145.88.234:22: exit code: 0 Jan 30 04:06:19.117: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8989 condition Ready to be false Jan 30 04:06:19.121: INFO: ssh 
prow@34.168.173.250:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 30 04:06:19.121: INFO: ssh prow@34.168.173.250:22: stdout: "" Jan 30 04:06:19.121: INFO: ssh prow@34.168.173.250:22: stderr: "" Jan 30 04:06:19.121: INFO: ssh prow@34.168.173.250:22: exit code: 0 Jan 30 04:06:19.121: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-pr8s condition Ready to be false Jan 30 04:06:19.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:19.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:19.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:21.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:21.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:21.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:23.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:23.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:23.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:25.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:25.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:25.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:27.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:27.439: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:27.439: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 04:06:29.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:29.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:29.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:31.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:31.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:31.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:33.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:33.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:33.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:35.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:35.626: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:35.626: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:37.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:37.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:37.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:39.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:39.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:39.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:41.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:41.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:41.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:06:43.784: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:43.802: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:43.802: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:45.824: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:45.842: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:45.842: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:47.865: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:47.882: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:47.882: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:49.906: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:49.923: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:49.923: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:51.948: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:51.963: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:51.963: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:53.987: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:54.002: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:54.003: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:56.028: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:56.042: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:06:56.043: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:58.069: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:06:58.083: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:06:58.084: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:00.110: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:00.123: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:00.123: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:02.150: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:02.164: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:02.164: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:04.190: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:04.204: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:04.204: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:06.230: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:06.244: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:06.244: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:08.271: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:08.285: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:08.285: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:10.312: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 
04:07:10.325: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:10.325: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:12.353: INFO: Couldn't get node bootstrap-e2e-minion-group-2w7z Jan 30 04:07:12.365: INFO: Couldn't get node bootstrap-e2e-minion-group-pr8s Jan 30 04:07:12.365: INFO: Couldn't get node bootstrap-e2e-minion-group-8989 Jan 30 04:07:20.059: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:20.059: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:20.060: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:22.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:22.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:22.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:24.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:24.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:24.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:26.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:26.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:26.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:28.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:28.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:28.365: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:30.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:30.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:30.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:32.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:32.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:32.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:34.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:34.506: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:34.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:36.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:36.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:36.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:38.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:38.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:38.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:40.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:40.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:40.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:42.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:42.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 04:07:42.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:44.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:44.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:44.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:46.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:46.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:46.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:48.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:48.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:48.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:50.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:50.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:50.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:52.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:52.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:52.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:54.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:54.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:54.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:57.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:57.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:57.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:59.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:59.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:07:59.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:01.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:01.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:01.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:03.144: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:03.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:03.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:05.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:05.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:05.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:07.233: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:07.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:07.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 04:08:09.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:09.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:09.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:11.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:11.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:11.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:13.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:13.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:13.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:15.409: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:15.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:15.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:17.454: INFO: Condition Ready of node bootstrap-e2e-minion-group-8989 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:17.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-2w7z is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:17.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-pr8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 04:08:19.455: INFO: Node bootstrap-e2e-minion-group-8989 didn't reach desired Ready condition status (false) within 2m0s Jan 30 04:08:19.535: INFO: Node bootstrap-e2e-minion-group-pr8s didn't reach desired Ready condition status (false) within 2m0s Jan 30 04:08:19.536: INFO: Node bootstrap-e2e-minion-group-2w7z didn't reach desired Ready condition status (false) within 2m0s Jan 30 04:08:19.536: INFO: Node bootstrap-e2e-minion-group-2w7z failed reboot test. Jan 30 04:08:19.536: INFO: Node bootstrap-e2e-minion-group-8989 failed reboot test. Jan 30 04:08:19.536: INFO: Node bootstrap-e2e-minion-group-pr8s failed reboot test. 
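For readability, the escaped command that the test sends over SSH to each node (logged above as a single quoted string, and whose set -x trace the termination hook reads back from /tmp/drop-outbound.log below) expands to roughly the following script. It keeps loopback traffic open, drops every other outbound packet for 120 seconds, then removes both rules again; the surrounding 2m0s wait expects each node's Ready condition to flip to false during that window.

nohup sh -c '
	set -x
	sleep 10
	# keep loopback traffic working while everything else is dropped
	while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
	while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
	date
	sleep 120
	# restore normal outbound connectivity
	while true; do sudo iptables -D OUTPUT -j DROP && break; done
	while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-outbound.log 2>&1 &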
Jan 30 04:08:19.536: INFO: Executing termination hook on nodes Jan 30 04:08:19.536: INFO: Getting external IP address for bootstrap-e2e-minion-group-2w7z Jan 30 04:08:19.536: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-2w7z(34.83.14.121:22) Jan 30 04:08:35.493: INFO: ssh prow@34.83.14.121:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 30 04:08:35.493: INFO: ssh prow@34.83.14.121:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:06:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:08:35.493: INFO: ssh prow@34.83.14.121:22: stderr: "" Jan 30 04:08:35.493: INFO: ssh prow@34.83.14.121:22: exit code: 0 Jan 30 04:08:35.493: INFO: Getting external IP address for bootstrap-e2e-minion-group-8989 Jan 30 04:08:35.493: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-8989(34.145.88.234:22) Jan 30 04:08:36.022: INFO: ssh prow@34.145.88.234:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 30 04:08:36.022: INFO: ssh prow@34.145.88.234:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:06:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:08:36.022: INFO: ssh prow@34.145.88.234:22: stderr: "" Jan 30 04:08:36.022: INFO: ssh prow@34.145.88.234:22: exit code: 0 Jan 30 04:08:36.022: INFO: Getting external IP address for bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.022: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-pr8s(34.168.173.250:22) Jan 30 04:08:36.555: INFO: ssh prow@34.168.173.250:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 30 04:08:36.555: INFO: ssh prow@34.168.173.250:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 04:06:29 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 04:08:36.555: INFO: ssh prow@34.168.173.250:22: stderr: "" Jan 30 04:08:36.555: INFO: ssh prow@34.168.173.250:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 04:08:36.556 < Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/30/23 04:08:36.556 (2m18.291s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 04:08:36.556 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 04:08:36.556 Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. 
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-9vnqf to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.407024514s (3.407035047s including waiting) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-9vnqf_kube-system(81e628a9-68fb-4bf9-a0f3-07efd15135df) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-9vnqf: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-ts65r to bootstrap-e2e-minion-group-8989 Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.048544975s (1.048559529s including waiting) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet 
bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container coredns Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f-ts65r: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-9vnqf Jan 30 04:08:36.606: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-ts65r Jan 30 04:08:36.606: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 04:08:36.606: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
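The event dump that the AfterEach step is printing here can be approximated by hand when debugging a run like this; a minimal sketch, assuming kubectl is configured against the same test cluster:

# list kube-system events in the order they occurred (most recent last)
kubectl get events -n kube-system --sort-by=.lastTimestamp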
Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 04:08:36.606: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 30 04:08:36.606: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e368a became leader Jan 30 04:08:36.606: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_a49a became leader Jan 30 04:08:36.606: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1499f became leader Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-kfwd4 to bootstrap-e2e-minion-group-8989 Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 626.102447ms (626.126027ms including waiting) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:08:36.606: INFO: event for konnectivity-agent-kfwd4: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rzzz6 to bootstrap-e2e-minion-group-2w7z Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 649.515062ms (649.530311ms including waiting) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container konnectivity-agent Jan 30 04:08:36.606: INFO: event for 
konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Unhealthy: Liveness probe failed: Get "http://10.64.1.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-rzzz6: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wm5g7 to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 933.59456ms (933.605653ms including waiting) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container konnectivity-agent Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Failed: Error: failed to get sandbox container task: no running task found: task f5fb933e314e02e8c688680c6515433f89f38b11e6128a51e48c4bb125c4e747 not found: not found Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent-wm5g7: {kubelet bootstrap-e2e-minion-group-pr8s} Unhealthy: Liveness probe failed: Get "http://10.64.3.17:8093/healthz": dial tcp 10.64.3.17:8093: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 04:08:36.606: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wm5g7 Jan 30 04:08:36.606: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rzzz6 Jan 30 04:08:36.606: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-kfwd4 Jan 30 04:08:36.606: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 30 04:08:36.606: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 04:08:36.606: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 30 04:08:36.606: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 04:08:36.606: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 04:08:36.606: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(d0b483a2668f277999bcc23ee75fc99e) Jan 30 04:08:36.606: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c6c2a8ca-a36c-403f-9999-a2b000b3920e became leader Jan 30 04:08:36.606: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f291cda0-4aa2-4a2c-b2d0-0571517f319b became leader Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-vcng2 to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.072385291s (3.072406123s including waiting) Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container autoscaler Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container autoscaler Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container autoscaler Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985-vcng2: {kubelet bootstrap-e2e-minion-group-pr8s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-vcng2_kube-system(5881f6ae-7dab-414e-bcbe-bad1b6578adb) Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-vcng2 Jan 30 04:08:36.606: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Created: Created container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Started: Started container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} Killing: Stopping container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-2w7z: {kubelet bootstrap-e2e-minion-group-2w7z} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-2w7z_kube-system(de89eacf2d0b5006d7508757b58cec1d) Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Created: Created container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Started: Started container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} Killing: Stopping container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8989: {kubelet bootstrap-e2e-minion-group-8989} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8989_kube-system(7391456f443d7cab197930929fc65610) Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Created: Created container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Started: Started container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} Killing: Stopping container kube-proxy Jan 30 04:08:36.606: INFO: event for kube-proxy-bootstrap-e2e-minion-group-pr8s: {kubelet bootstrap-e2e-minion-group-pr8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
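The repeated Killing/SandboxChanged/BackOff events for the kube-proxy static pods above would normally be chased down per node; a minimal sketch using one of the pod names from these events (a hypothetical follow-up, not part of the test run):

# inspect restart counts, probe results and recent events for the static pod
kubectl -n kube-system describe pod kube-proxy-bootstrap-e2e-minion-group-2w7z
# logs from the previous (crashed) container instance
kubectl -n kube-system logs kube-proxy-bootstrap-e2e-minion-group-2w7z --previous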
Jan 30 04:08:36.606: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.80_97636ed7810137" already present on machine Jan 30 04:08:36.606: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 04:08:36.606: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 04:08:36.606: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_832b3d7b-7090-4716-933e-249d446b7700 became leader Jan 30 04:08:36.606: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c8af7ead-b18f-4a96-ac6e-6319fcf78599 became leader Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-mh466 to bootstrap-e2e-minion-group-pr8s Jan 30 04:08:36.606: INFO: event for l7-default-backend-8549d69d99-mh466: {kubelet bootstrap-e2e-minion-group-pr8s} Pulling: Pulling i