go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:15:21.31
from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:12:05.279
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:12:05.279 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:12:05.279
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 22:12:05.279
Jan 28 22:12:05.279: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 22:12:05.28
Jan 28 22:12:05.320: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:12:07.360: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:12:09.363: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:12:11.359: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:12:13.361: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:12:15.360: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:12:17.360: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:12:19.359: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:12:21.359: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 22:13:02.309
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 22:13:02.452
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:13:02.548 (57.269s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:13:02.548
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:13:02.548 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 22:13:02.548
Jan 28 22:13:02.747: INFO: Getting bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:02.747: INFO: Getting bootstrap-e2e-minion-group-jdvv
Jan 28 22:13:02.747: INFO: Getting bootstrap-e2e-minion-group-rndd
Jan 28 22:13:02.795: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-rndd condition Ready to be true
Jan 28 22:13:02.796: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jdvv condition Ready to be true
Jan 28 22:13:02.796: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true
Jan 28 22:13:02.841: INFO: Node bootstrap-e2e-minion-group-jdvv has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0]
Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0]
Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:13:02.842: INFO: Node bootstrap-e2e-minion-group-rndd has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:13:02.842: INFO: Node bootstrap-e2e-minion-group-gw8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:13:02.886: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 45.78659ms
Jan 28 22:13:02.886: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded"
Jan 28 22:13:02.889: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.851915ms
Jan 28 22:13:02.889: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 48.679264ms
Jan 28 22:13:02.889: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:02.889: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:02.891: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 50.274018ms
Jan 28 22:13:02.891: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded"
Jan 28 22:13:02.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 52.191648ms
Jan 28 22:13:02.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded"
Jan 28 22:13:02.895: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 52.412346ms
Jan 28 22:13:02.895: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded"
Jan 28 22:13:02.895: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 52.421444ms
Jan 28 22:13:02.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 52.26248ms
Jan 28 22:13:02.895: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded"
Jan 28 22:13:02.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded"
Jan 28 22:13:02.895: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:13:02.895: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 22:13:02.895: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd
Jan 28 22:13:02.895: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:02.895: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22)
Jan 28 22:13:02.895: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22)
Jan 28 22:13:03.427: INFO: ssh prow@34.145.37.78:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 28 22:13:03.427: INFO: ssh prow@34.145.37.78:22: stdout: ""
Jan 28 22:13:03.427: INFO: ssh prow@34.145.37.78:22: stderr: ""
Jan 28 22:13:03.427: INFO: ssh prow@34.145.37.78:22: exit code: 0
Jan 28 22:13:03.427: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be false
Jan 28 22:13:03.448: INFO: ssh prow@34.105.20.128:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 28 22:13:03.448: INFO: ssh prow@34.105.20.128:22: stdout: ""
Jan 28 22:13:03.448: INFO: ssh prow@34.105.20.128:22: stderr: ""
Jan 28 22:13:03.448: INFO: ssh prow@34.105.20.128:22: exit code: 0
Jan 28 22:13:03.448: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be false
Jan 28 22:13:03.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:03.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
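For readability, the escaped script inside the two SSH commands above decodes to the following shell (content verbatim from the log; only the comments are added here as annotation). It inserts an iptables rule that keeps loopback traffic working, then a rule that drops every other inbound packet for 120 seconds, and finally deletes both rules so the node becomes reachable again:

    nohup sh -c '
        set -x
        sleep 10
        # keep loopback traffic flowing while everything else is dropped
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        # drop all other inbound packets; the node should stop posting Ready status
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        # remove the DROP rule, then the loopback ACCEPT rule, restoring connectivity
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &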
Jan 28 22:13:04.936: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.095196731s
Jan 28 22:13:04.936: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:04.936: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095166732s
Jan 28 22:13:04.936: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:05.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:05.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:06.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.091444451s
Jan 28 22:13:06.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:06.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092487489s
Jan 28 22:13:06.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:07.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:07.576: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:08.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.091893063s
Jan 28 22:13:08.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:08.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09312664s
Jan 28 22:13:08.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:09.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:09.619: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:10.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.091738785s
Jan 28 22:13:10.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:10.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092888525s
Jan 28 22:13:10.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:11.640: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:11.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:12.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.091805843s
Jan 28 22:13:12.932: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091640087s
Jan 28 22:13:12.932: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:12.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:13.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:13.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:14.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.091442806s
Jan 28 22:13:14.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:14.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.092548156s
Jan 28 22:13:14.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:15.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:15.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:16.933: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.092369525s
Jan 28 22:13:16.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:16.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.093467432s
Jan 28 22:13:16.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:17.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:17.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:18.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.091847053s
Jan 28 22:13:18.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:18.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.092925994s
Jan 28 22:13:18.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:19.811: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:19.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:20.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.091898856s
Jan 28 22:13:20.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:20.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.092952432s
Jan 28 22:13:20.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:21.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:21.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:22.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.092019114s
Jan 28 22:13:22.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:22.933: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.092270253s
Jan 28 22:13:22.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:23.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:23.920: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:24.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.091591658s
Jan 28 22:13:24.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:24.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 22.0926972s
Jan 28 22:13:24.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:25.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:25.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:26.933: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.092229608s
Jan 28 22:13:26.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:26.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 24.093204075s
Jan 28 22:13:26.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:27.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:28.005: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:28.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.091463526s
Jan 28 22:13:28.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:28.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.092618881s
Jan 28 22:13:28.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:30.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:30.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:30.932: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 28.091309792s
Jan 28 22:13:30.932: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:30.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.091603993s
Jan 28 22:13:30.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:32.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:32.136: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:32.932: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.091481114s
Jan 28 22:13:32.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.091665476s
Jan 28 22:13:32.932: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:13:32.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:13:34.180: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:34.180: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:13:34.930: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:13:34.930: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded.
Jan 28 22:13:34.930: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-rtgpq: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:13:34.930: INFO: Pod kube-dns-autoscaler-5f6455f985-rtgpq failed to be running and ready, or succeeded.
Jan 28 22:13:34.930: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0]
Jan 28 22:13:34.930: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-rtgpq: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:05:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:05:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP: PodIPs:[] StartTime:2023-01-28 21:53:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:3 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://327aa9b55c426f26dbce218ae381d10dc0d1de28e736fd47f30215df0e91d6b7 Started:0xc004b4710a}] QOSClass:Burstable EphemeralContainerStatuses:[]}
Jan 28 22:13:34.970: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-rtgpq/autoscaler, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq/log?container=autoscaler&previous=false": dial tcp 35.230.109.193:443: connect: connection refused:
Jan 28 22:13:34.970: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-rtgpq/autoscaler, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq/log?container=autoscaler&previous=false": dial tcp 35.230.109.193:443: connect: connection refused:
Jan 28 22:13:34.970: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:12:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:12:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.64.3.36 PodIPs:[{IP:10.64.3.36}] StartTime:2023-01-28 21:53:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(699caeb5-2b49-4d25-998b-e11af5bff8c6),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 22:12:05 +0000 UTC,FinishedAt:2023-01-28 22:12:13 +0000 UTC,ContainerID:containerd://b8bee3deb5864048b1b587ef59dfb1c4aede245df9bf2e280cfb87b1c723e79f,}} Ready:false RestartCount:11 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://b8bee3deb5864048b1b587ef59dfb1c4aede245df9bf2e280cfb87b1c723e79f Started:0xc004b47b0f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
Jan 28 22:13:35.009: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.230.109.193:443: connect: connection refused:
Jan 28 22:13:35.009: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.230.109.193:443: connect: connection refused:
Jan 28 22:13:36.220: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:36.220: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:38.260: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:38.260: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:40.301: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:40.301: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:42.340: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:42.340: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:44.380: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:44.380: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:46.421: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:46.421: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:48.461: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:48.461: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:50.502: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:50.502: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:52.541: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:52.541: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:54.581: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:54.581: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:56.621: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:56.621: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:13:58.661: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:13:58.661: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:00.702: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:00.702: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:02.742: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:02.742: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:04.782: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:04.782: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:06.822: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:06.822: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:08.862: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:08.862: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:10.902: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:10.902: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:12.942: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:12.942: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:14.982: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:14.982: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:17.022: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:17.022: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:19.063: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:19.063: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:21.102: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:21.102: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:23.142: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:23.142: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:25.183: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:25.183: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:27.223: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:27.223: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:29.263: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:29.266: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:31.303: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:31.306: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:33.344: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:33.348: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:35.388: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:35.391: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:37.428: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:37.431: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:39.467: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:39.471: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:41.508: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:41.510: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:43.549: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:43.550: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:45.589: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd
Jan 28 22:14:45.589: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s
Jan 28 22:14:52.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:14:52.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:14:54.125: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:14:54.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:14:56.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:14:56.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:14:58.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:14:58.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:15:00.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:15:00.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:15:02.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:15:02.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-gw8s didn't reach desired Ready condition status (false) within 2m0s
Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-rndd didn't reach desired Ready condition status (false) within 2m0s
Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-gw8s failed reboot test.
Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-jdvv failed reboot test.
Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-rndd failed reboot test.
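The 2m0s wait above polls each node's Ready condition through the API server, expecting it to flip to false while inbound packets are dropped; in this run both nodes kept reporting Ready=true for the whole window, which is what produces the failure verdict below. As a hypothetical manual spot-check of the same condition (assuming kubectl access to this cluster; this command is not part of the test framework):

    # prints the node's Ready condition status, e.g. "True"
    kubectl get node bootstrap-e2e-minion-group-rndd \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'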
Jan 28 22:15:04.308: INFO: Executing termination hook on nodes
Jan 28 22:15:04.308: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:04.308: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22)
Jan 28 22:15:20.250: INFO: ssh prow@34.105.20.128:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 28 22:15:20.250: INFO: ssh prow@34.105.20.128:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 22:13:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 28 22:15:20.250: INFO: ssh prow@34.105.20.128:22: stderr: ""
Jan 28 22:15:20.250: INFO: ssh prow@34.105.20.128:22: exit code: 0
Jan 28 22:15:20.250: INFO: Getting external IP address for bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:20.250: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-jdvv(34.127.24.56:22)
Jan 28 22:15:20.795: INFO: ssh prow@34.127.24.56:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 28 22:15:20.795: INFO: ssh prow@34.127.24.56:22: stdout: ""
Jan 28 22:15:20.795: INFO: ssh prow@34.127.24.56:22: stderr: "cat: /tmp/drop-inbound.log: No such file or directory\n"
Jan 28 22:15:20.795: INFO: ssh prow@34.127.24.56:22: exit code: 1
Jan 28 22:15:20.795: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-inbound.log: No such file or directory )
Jan 28 22:15:20.795: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd
Jan 28 22:15:20.795: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22)
Jan 28 22:15:21.310: INFO: ssh prow@34.145.37.78:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 28 22:15:21.310: INFO: ssh prow@34.145.37.78:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 22:13:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 28 22:15:21.310: INFO: ssh prow@34.145.37.78:22: stderr: ""
Jan 28 22:15:21.310: INFO: ssh prow@34.145.37.78:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:15:21.31
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 22:15:21.31 (2m18.762s)
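Decoded, the /tmp/drop-inbound.log stdout retrieved from bootstrap-e2e-minion-group-gw8s and bootstrap-e2e-minion-group-rndd above is the set -x trace of the drop-inbound script, showing it ran to completion on both nodes; bootstrap-e2e-minion-group-jdvv has no such log, consistent with the script never having been sent to that node after its pod precheck timed out:

    + sleep 10
    + true
    + sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT
    + break
    + true
    + sudo iptables -I INPUT 2 -j DROP
    + break
    + date
    Sat Jan 28 22:13:13 UTC 2023
    + sleep 120
    + true
    + sudo iptables -D INPUT -j DROP
    + break
    + true
    + sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT
    + break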
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 22:15:21.31 Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-77sdd to bootstrap-e2e-minion-group-gw8s Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.010007141s (1.010017589s including waiting) Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container coredns Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container coredns Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container coredns Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {node-controller } NodeNotReady: Node is not ready Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-77sdd Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-77sdd Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-77sdd_kube-system(db0c09f1-c4d8-4e56-ab71-b0803b234d20)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-77sdd
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: Get "http://10.64.2.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Liveness probe failed: Get "http://10.64.2.14:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Container coredns failed liveness probe, will be restarted
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Failed: Error: failed to get sandbox container task: no running task found: task 0b75b4d5d974b9f432b7e10e9d71af104dc8c2ddc0133e5a5cc1e268788ff5fc not found: not found
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-77sdd_kube-system(db0c09f1-c4d8-4e56-ab71-b0803b234d20)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-8xrbf to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.107628334s (2.107641232s including waiting)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": dial tcp 10.64.3.15:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-8xrbf_kube-system(f16a4d9b-c0c6-4f1c-94d6-b9a2f091b21e)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.20:8181/ready": dial tcp 10.64.3.20:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-8xrbf_kube-system(f16a4d9b-c0c6-4f1c-94d6-b9a2f091b21e)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.31:8181/ready": dial tcp 10.64.3.31:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-8xrbf
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-77sdd
Jan 28 22:15:21.403: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 28 22:15:21.403: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
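The FailedCreate event above ("insufficient quota to match these scopes") refers to a ResourceQuota scoped to pod priority classes: until a quota matching those scopes exists in kube-system, the replicaset controller cannot create the critical pods. A sketch of the scope selector the message names, expressed with the core/v1 types (an assumption for illustration, not the cluster's actual manifest):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Scope selector matching pods whose priorityClassName is one of the
        // two critical classes listed in the FailedCreate message above.
        sel := corev1.ScopeSelector{
            MatchExpressions: []corev1.ScopedResourceSelectorRequirement{{
                ScopeName: corev1.ResourceQuotaScopePriorityClass,
                Operator:  corev1.ScopeSelectorOpIn,
                Values:    []string{"system-node-critical", "system-cluster-critical"},
            }},
        }
        fmt.Printf("%+v\n", sel)
    }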
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 28 22:15:21.403: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3a39 became leader
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_5712c became leader
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_da42f became leader
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ac498 became leader
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3983d became leader
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-npfvc to bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 620.414125ms (620.448513ms including waiting)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-npfvc_kube-system(cd16d88d-4ef4-4c9a-96df-86fb4c70ef13)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-npfvc_kube-system(cd16d88d-4ef4-4c9a-96df-86fb4c70ef13)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-t5bmd to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.384242476s (1.38425164s including waiting)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-t5bmd_kube-system(07681149-8b9c-4c0d-bb8b-75eaf2c0c570)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-t5bmd_kube-system(07681149-8b9c-4c0d-bb8b-75eaf2c0c570)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Liveness probe failed: Get "http://10.64.3.30:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-twq5s to bootstrap-e2e-minion-group-rndd
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 602.431484ms (602.449236ms including waiting)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-twq5s_kube-system(de9ecb8f-d586-41fd-a04d-41f45f7ea0bf)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-t5bmd
Jan 28 22:15:21.403: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-twq5s
Jan 28 22:15:21.403: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-npfvc
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 28 22:15:21.403: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 28 22:15:21.403: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 28 22:15:21.403: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 28 22:15:21.403: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_06832965-97e5-41e3-bec8-383d2f8deac1 became leader
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_58fabdfe-fbc6-41f6-b7ec-99d45b3aed32 became leader
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ade39950-f49a-4ad2-9cde-b63a206669f4 became leader
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_d447a835-496d-4a2e-83eb-b73c743e1937 became leader
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_d16b72c0-55fe-496a-92f4-0ae27f545a10 became leader
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-rtgpq to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.593488011s (1.5934971s including waiting)
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container autoscaler
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container autoscaler
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container autoscaler
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-rtgpq_kube-system(5025b848-8fb4-4098-be54-5a4a0668cef3)
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-rtgpq
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-gw8s_kube-system(08a4c2012f0274262c3fc9dc7f0563a7)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-gw8s_kube-system(08a4c2012f0274262c3fc9dc7f0563a7)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-jdvv_kube-system(e126030fe08b481bd93bca8e2433b514)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-jdvv_kube-system(e126030fe08b481bd93bca8e2433b514)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-rndd_kube-system(d885339590022f41c030a27cba4cc12d)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-rndd_kube-system(d885339590022f41c030a27cba4cc12d)
Jan 28 22:15:21.403: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 28 22:15:21.403: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 28 22:15:21.403: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_dbab41bb-932f-4b7b-a4b6-50c01fb48deb became leader
Jan 28 22:15:21.403: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fd78c921-38c9-42ac-b405-1d075f95bdbf became leader
Jan 28 22:15:21.403: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_2cf81737-e3b5-4d96-ad71-51da341fe4b2 became leader
Jan 28 22:15:21.403: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a84201d0-0d8a-4734-b5b0-ec434b4709ca became leader
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
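The FailedScheduling entries above are taint-based filtering at work: while a node is NotReady the node lifecycle controller taints it with node.kubernetes.io/not-ready, and the scheduler skips it for any pod lacking a matching toleration. A sketch of such a toleration using the core/v1 types (illustrative only; the NoSchedule effect is inferred from the scheduling context here, and the taint manager's eviction path uses the NoExecute variant):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Tolerate the taint named in the FailedScheduling message above;
        // Exists matches the taint regardless of its (empty) value.
        tol := corev1.Toleration{
            Key:      "node.kubernetes.io/not-ready",
            Operator: corev1.TolerationOpExists,
            Effect:   corev1.TaintEffectNoSchedule,
        }
        fmt.Printf("%+v\n", tol)
    }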
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-stdz9 to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 717.987767ms (717.996689ms including waiting)
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container default-http-backend
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container default-http-backend
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container default-http-backend
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container default-http-backend
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-stdz9
Jan 28 22:15:21.403: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://10.138.0.2:8086/healthz": dial tcp 10.138.0.2:8086: connect: connection refused
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8gbc7 to bootstrap-e2e-minion-group-rndd
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 732.758665ms (732.778804ms including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.847867171s (1.847875747s including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-k6tg5 to bootstrap-e2e-master
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 726.879318ms (726.885896ms including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.308701037s (2.308707341s including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-xkczn to bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 739.300498ms (739.324026ms including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.989961468s (1.98997004s including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-xp6b5 to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 766.016998ms (766.038857ms including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.049975938s (2.049992955s including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-k6tg5
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8gbc7
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-xkczn
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-xp6b5
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-bxdnp to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.40352739s (2.403537151s including waiting)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.980585574s (1.980593738s including waiting)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-6764bf875c-bxdnp_kube-system(87cfd581-c05b-43b9-84ca-d52a66620447)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-6764bf875c-bxdnp_kube-system(87cfd581-c05b-43b9-84ca-d52a66620447)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-bxdnp
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-bxdnp
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-hqtr6 to bootstrap-e2e-minion-group-rndd
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.306501676s (1.306517306s including waiting)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 973.374731ms (973.390393ms including waiting)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Failed: Error: failed to get sandbox container task: no running task found: task 71079ec71b556a610f487b9fe75651f31afafb33c20ca14a5f922d3ed9aa5de2 not found: not found
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-hqtr6
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server-nanny
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server-nanny
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.12:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.12:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Liveness probe failed: Get "https://10.64.1.12:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server-nanny
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server-nanny
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.14:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Liveness probe failed: Get "https://10.64.1.14:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-hqtr6
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.411688467s (2.411696734s including waiting)
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(699caeb5-2b49-4d25-998b-e11af5bff8c6)
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(699caeb5-2b49-4d25-998b-e11af5bff8c6)
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:15:21.404 (94ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:15:21.404
Jan 28 22:15:21.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:15:21.45 (47ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 22:15:21.45
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 22:15:21.45 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:15:21.45
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:15:21.451
STEP: Collecting events from namespace "reboot-6388". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 22:15:21.451
STEP: Found 0 events.
- test/e2e/framework/debug/dump.go:46 @ 01/28/23 22:15:21.491 Jan 28 22:15:21.533: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 22:15:21.533: INFO: Jan 28 22:15:21.579: INFO: Logging node info for node bootstrap-e2e-master Jan 28 22:15:21.630: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 8b296ac8-60af-491a-9f1c-cf2d7db0caac 3090 0 2023-01-28 21:53:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 21:53:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 21:53:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 21:53:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 22:14:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-protobuf/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 21:53:38 +0000 UTC,LastTransitionTime:2023-01-28 21:53:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 22:14:52 +0000 UTC,LastTransitionTime:2023-01-28 21:53:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 22:14:52 +0000 UTC,LastTransitionTime:2023-01-28 21:53:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 22:14:52 +0000 UTC,LastTransitionTime:2023-01-28 21:53:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 22:14:52 +0000 UTC,LastTransitionTime:2023-01-28 21:53:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.109.193,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:07ffb543bedc149534b2440709dac943,SystemUUID:07ffb543-bedc-1495-34b2-440709dac943,BootID:c11957ee-9d65-4212-80d4-e60012976419,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 22:15:21.630: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 22:15:21.676: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 22:15:21.741: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:21.741: INFO: Container konnectivity-server-container ready: true, restart count 4 Jan 28 22:15:21.741: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:21.741: INFO: Container kube-apiserver ready: true, restart count 3 Jan 28 22:15:21.741: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 21:52:50 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:21.741: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 28 22:15:21.741: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 21:52:50 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:21.741: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 28 22:15:21.741: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:21.741: INFO: Container etcd-container ready: true, restart count 1 Jan 28 22:15:21.741: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:21.741: INFO: Container etcd-container ready: true, restart count 2 Jan 28 22:15:21.741: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:21.741: INFO: Container kube-controller-manager ready: false, restart count 6 Jan 28 22:15:21.741: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:21.741: INFO: Container kube-scheduler ready: true, restart count 4 Jan 28 22:15:21.741: INFO: metadata-proxy-v0.1-k6tg5 started at 2023-01-28 21:53:17 +0000 UTC (0+2 container statuses recorded) Jan 28 22:15:21.741: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 22:15:21.741: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 22:15:21.922: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 22:15:21.922: INFO: Logging node info for node bootstrap-e2e-minion-group-gw8s Jan 28 22:15:21.964: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-gw8s 8baa011e-ebc5-4ddb-b17f-5306514d9570 3051 0 2023-01-28 21:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-gw8s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 21:53:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 22:07:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 22:08:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 22:13:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 22:13:10 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-protobuf/us-west1-b/bootstrap-e2e-minion-group-gw8s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 21:53:38 +0000 UTC,LastTransitionTime:2023-01-28 21:53:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:10 +0000 UTC,LastTransitionTime:2023-01-28 22:08:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:10 +0000 UTC,LastTransitionTime:2023-01-28 22:08:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:10 +0000 UTC,LastTransitionTime:2023-01-28 22:08:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 22:13:10 +0000 UTC,LastTransitionTime:2023-01-28 22:08:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.20.128,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-gw8s.c.k8s-jkns-e2e-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-gw8s.c.k8s-jkns-e2e-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a1fe5d1cff5799c1dfa37127e8163c9f,SystemUUID:a1fe5d1c-ff57-99c1-dfa3-7127e8163c9f,BootID:e9a93669-a286-41e2-ada3-f1d44aab47fb,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 22:15:21.965: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-gw8s Jan 28 22:15:22.011: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-gw8s Jan 28 22:15:22.119: INFO: kube-proxy-bootstrap-e2e-minion-group-gw8s started at 2023-01-28 21:53:21 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.119: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 22:15:22.119: INFO: metadata-proxy-v0.1-xkczn started at 2023-01-28 21:53:22 +0000 UTC (0+2 container statuses recorded) Jan 28 22:15:22.119: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 22:15:22.119: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 22:15:22.119: INFO: konnectivity-agent-npfvc started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.119: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 28 22:15:22.119: INFO: coredns-6846b5b5f-77sdd started at 2023-01-28 21:53:44 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.119: INFO: Container coredns ready: true, restart count 5 Jan 28 22:15:22.286: INFO: Latency metrics for node bootstrap-e2e-minion-group-gw8s Jan 28 22:15:22.286: INFO: Logging node info for node bootstrap-e2e-minion-group-jdvv Jan 28 22:15:22.330: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-jdvv b8bb8d18-5ed7-4907-be37-ee1dfaa09d07 3106 0 2023-01-28 21:53:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-jdvv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 21:53:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:57:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 22:05:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 22:10:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 22:14:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-protobuf/us-west1-b/bootstrap-e2e-minion-group-jdvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 21:53:38 +0000 UTC,LastTransitionTime:2023-01-28 21:53:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 22:10:36 +0000 UTC,LastTransitionTime:2023-01-28 21:58:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 22:10:36 +0000 UTC,LastTransitionTime:2023-01-28 21:58:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 22:10:36 +0000 UTC,LastTransitionTime:2023-01-28 21:58:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 22:10:36 +0000 UTC,LastTransitionTime:2023-01-28 22:05:30 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.24.56,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-jdvv.c.k8s-jkns-e2e-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-jdvv.c.k8s-jkns-e2e-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:513e46be1ccc0a1da13599e75884edf8,SystemUUID:513e46be-1ccc-0a1d-a135-99e75884edf8,BootID:10d1a8a6-5c8f-4588-900d-6a51d3348d1a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 22:15:22.330: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-jdvv Jan 28 22:15:22.376: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-jdvv Jan 28 22:15:22.453: INFO: kube-proxy-bootstrap-e2e-minion-group-jdvv started at 2023-01-28 21:53:22 +0000 UTC (0+1 container statuses recorded) Jan 28 
22:15:22.453: INFO: Container kube-proxy ready: true, restart count 8 Jan 28 22:15:22.453: INFO: l7-default-backend-8549d69d99-stdz9 started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.453: INFO: Container default-http-backend ready: true, restart count 1 Jan 28 22:15:22.453: INFO: volume-snapshot-controller-0 started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.453: INFO: Container volume-snapshot-controller ready: false, restart count 11 Jan 28 22:15:22.453: INFO: kube-dns-autoscaler-5f6455f985-rtgpq started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.453: INFO: Container autoscaler ready: false, restart count 3 Jan 28 22:15:22.453: INFO: coredns-6846b5b5f-8xrbf started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.453: INFO: Container coredns ready: true, restart count 6 Jan 28 22:15:22.453: INFO: metadata-proxy-v0.1-xp6b5 started at 2023-01-28 21:53:23 +0000 UTC (0+2 container statuses recorded) Jan 28 22:15:22.453: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 22:15:22.453: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 22:15:22.453: INFO: konnectivity-agent-t5bmd started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.453: INFO: Container konnectivity-agent ready: false, restart count 7 Jan 28 22:15:22.628: INFO: Latency metrics for node bootstrap-e2e-minion-group-jdvv Jan 28 22:15:22.628: INFO: Logging node info for node bootstrap-e2e-minion-group-rndd Jan 28 22:15:22.671: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rndd 41be9bbb-23a1-4f74-99dc-2ef115465238 3054 0 2023-01-28 21:53:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rndd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 21:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 22:07:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 22:08:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 22:13:07 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 22:13:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-protobuf/us-west1-b/bootstrap-e2e-minion-group-rndd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 
22:08:05 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 21:53:38 +0000 UTC,LastTransitionTime:2023-01-28 21:53:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:12 +0000 UTC,LastTransitionTime:2023-01-28 22:08:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:12 +0000 UTC,LastTransitionTime:2023-01-28 22:08:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:12 +0000 UTC,LastTransitionTime:2023-01-28 22:08:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 22:13:12 +0000 UTC,LastTransitionTime:2023-01-28 22:08:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.37.78,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rndd.c.k8s-jkns-e2e-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rndd.c.k8s-jkns-e2e-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6859cc7a546c71de83c075bc57ce869e,SystemUUID:6859cc7a-546c-71de-83c0-75bc57ce869e,BootID:e868ec98-cb99-4706-a359-8636a9af2027,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 22:15:22.671: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rndd Jan 28 22:15:22.717: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rndd Jan 28 22:15:22.795: INFO: kube-proxy-bootstrap-e2e-minion-group-rndd started at 2023-01-28 21:53:20 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.795: INFO: Container kube-proxy ready: true, restart count 10 Jan 28 22:15:22.795: INFO: metadata-proxy-v0.1-8gbc7 started at 2023-01-28 21:53:21 +0000 UTC (0+2 container statuses recorded) Jan 28 22:15:22.795: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 22:15:22.795: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 22:15:22.795: INFO: konnectivity-agent-twq5s started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.795: INFO: Container konnectivity-agent ready: true, restart count 7 Jan 28 22:15:22.795: INFO: metrics-server-v0.5.2-867b8754b9-hqtr6 started at 2023-01-28 21:54:18 +0000 UTC (0+2 container statuses recorded) Jan 28 22:15:22.795: INFO: Container metrics-server ready: false, restart count 9 Jan 28 22:15:22.795: INFO: Container metrics-server-nanny ready: false, restart count 9 Jan 28 22:15:22.961: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-rndd END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:15:22.961 (1.51s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:15:22.961 (1.511s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:15:22.961 STEP: Destroying namespace "reboot-6388" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 22:15:22.961 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:15:23.005 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:15:23.006 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:15:23.006 (0s)
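The Node dump and pod listings above are what the framework's failure handler records for each node. For reference, the same fields can be read back with kubectl against the affected cluster; a minimal sketch, assuming working kubectl access (illustrative only, not part of the test):

    #!/bin/sh
    # Inspect the same per-node detail the e2e framework logs above.
    NODE=bootstrap-e2e-minion-group-rndd

    # The kubelet-maintained Ready condition (True in the dump above).
    kubectl get node "$NODE" \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

    # All conditions, including the node-problem-detector ones
    # (KernelDeadlock, ReadonlyFilesystem, FrequentKubeletRestart, ...).
    kubectl get node "$NODE" \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

    # Pods scheduled to that node, matching the "started at ..." entries.
    kubectl get pods --all-namespaces --field-selector spec.nodeName="$NODE" -o wide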
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:15:21.31 (from junit_01.xml)
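In the [It] block below, the test SSHes a packet-dropping script to each node; the log prints it in escaped form. Unescaped and lightly annotated for readability, the command it runs is:

    nohup sh -c '
        set -x
        sleep 10
        # Keep loopback traffic working, then drop everything else inbound.
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        # Restore inbound traffic after two minutes.
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &

The date call between the DROP insert and the 120-second sleep is what produces the "Sat Jan 28 22:13:13 UTC 2023" line recovered from /tmp/drop-inbound.log by the termination hook at the end of the run.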
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:12:05.279 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:12:05.279 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:12:05.279 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 22:12:05.279 Jan 28 22:12:05.279: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 22:12:05.28 Jan 28 22:12:05.320: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:07.360: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:09.363: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:11.359: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:13.361: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:15.360: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:17.360: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:19.359: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:21.359: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 22:13:02.309 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 22:13:02.452 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:13:02.548 (57.269s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:13:02.548 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:13:02.548 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 22:13:02.548 Jan 28 22:13:02.747: INFO: Getting bootstrap-e2e-minion-group-gw8s Jan 28 22:13:02.747: INFO: Getting bootstrap-e2e-minion-group-jdvv Jan 28 22:13:02.747: INFO: Getting bootstrap-e2e-minion-group-rndd Jan 28 22:13:02.795: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-rndd condition Ready to be true Jan 28 22:13:02.796: INFO: Waiting up to 20s for node 
bootstrap-e2e-minion-group-jdvv condition Ready to be true Jan 28 22:13:02.796: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true Jan 28 22:13:02.841: INFO: Node bootstrap-e2e-minion-group-jdvv has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0] Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0] Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:13:02.841: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:13:02.842: INFO: Node bootstrap-e2e-minion-group-rndd has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:13:02.842: INFO: Node bootstrap-e2e-minion-group-gw8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:13:02.842: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:13:02.886: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 45.78659ms Jan 28 22:13:02.886: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded" Jan 28 22:13:02.889: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.851915ms Jan 28 22:13:02.889: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 48.679264ms Jan 28 22:13:02.889: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:02.889: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:02.891: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 50.274018ms Jan 28 22:13:02.891: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded" Jan 28 22:13:02.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 52.191648ms Jan 28 22:13:02.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded" Jan 28 22:13:02.895: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 52.412346ms Jan 28 22:13:02.895: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded" Jan 28 22:13:02.895: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 52.421444ms Jan 28 22:13:02.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 52.26248ms Jan 28 22:13:02.895: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded" Jan 28 22:13:02.895: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded" Jan 28 22:13:02.895: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:13:02.895: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:13:02.895: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd Jan 28 22:13:02.895: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s Jan 28 22:13:02.895: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22) Jan 28 22:13:02.895: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22) Jan 28 22:13:03.427: INFO: ssh prow@34.145.37.78:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 22:13:03.427: INFO: ssh prow@34.145.37.78:22: stdout: "" Jan 28 22:13:03.427: INFO: ssh prow@34.145.37.78:22: stderr: "" Jan 28 22:13:03.427: INFO: ssh prow@34.145.37.78:22: exit code: 0 Jan 28 22:13:03.427: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be false Jan 28 22:13:03.448: INFO: ssh prow@34.105.20.128:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 22:13:03.448: INFO: ssh prow@34.105.20.128:22: stdout: "" Jan 28 22:13:03.448: INFO: ssh prow@34.105.20.128:22: stderr: "" Jan 28 22:13:03.448: INFO: ssh prow@34.105.20.128:22: exit code: 0 Jan 28 22:13:03.448: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be false Jan 28 22:13:03.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:03.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:04.936: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.095196731s Jan 28 22:13:04.936: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:04.936: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095166732s Jan 28 22:13:04.936: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:05.512: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:05.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:06.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.091444451s Jan 28 22:13:06.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:06.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092487489s Jan 28 22:13:06.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:07.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:07.576: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:08.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.091893063s Jan 28 22:13:08.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:08.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09312664s Jan 28 22:13:08.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:09.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:09.619: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:10.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.091738785s Jan 28 22:13:10.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:10.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092888525s Jan 28 22:13:10.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:11.640: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:11.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:12.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.091805843s Jan 28 22:13:12.932: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.091640087s Jan 28 22:13:12.932: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:12.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:13.683: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:13.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:14.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.091442806s Jan 28 22:13:14.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:14.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.092548156s Jan 28 22:13:14.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:15.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:15.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:16.933: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.092369525s Jan 28 22:13:16.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:16.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.093467432s Jan 28 22:13:16.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:17.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:17.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:18.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.091847053s Jan 28 22:13:18.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:18.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.092925994s Jan 28 22:13:18.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:19.811: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:19.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:20.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.091898856s Jan 28 22:13:20.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:20.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.092952432s Jan 28 22:13:20.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:21.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:21.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:22.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.092019114s Jan 28 22:13:22.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:22.933: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.092270253s Jan 28 22:13:22.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:23.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:23.920: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:24.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.091591658s Jan 28 22:13:24.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:24.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 22.0926972s Jan 28 22:13:24.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:25.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:25.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:26.933: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.092229608s Jan 28 22:13:26.933: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:26.934: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 24.093204075s Jan 28 22:13:26.934: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:27.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:28.005: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:28.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.091463526s Jan 28 22:13:28.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:28.933: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.092618881s Jan 28 22:13:28.933: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:30.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:30.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:30.932: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 28.091309792s Jan 28 22:13:30.932: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:30.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.091603993s Jan 28 22:13:30.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:32.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:32.136: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:32.932: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.091481114s Jan 28 22:13:32.932: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.091665476s Jan 28 22:13:32.932: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:13:32.932: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:12:13 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:13:34.180: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:34.180: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:13:34.930: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:13:34.930: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 28 22:13:34.930: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-rtgpq: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:13:34.930: INFO: Pod kube-dns-autoscaler-5f6455f985-rtgpq failed to be running and ready, or succeeded. Jan 28 22:13:34.930: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0] Jan 28 22:13:34.930: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-rtgpq: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:05:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:05:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP: PodIPs:[] StartTime:2023-01-28 21:53:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:3 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://327aa9b55c426f26dbce218ae381d10dc0d1de28e736fd47f30215df0e91d6b7 Started:0xc004b4710a}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 22:13:34.970: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-rtgpq/autoscaler, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq/log?container=autoscaler&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 22:13:34.970: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-rtgpq/autoscaler, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq/log?container=autoscaler&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 22:13:34.970: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:12:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:12:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.64.3.36 PodIPs:[{IP:10.64.3.36}] StartTime:2023-01-28 21:53:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=volume-snapshot-controller 
pod=volume-snapshot-controller-0_kube-system(699caeb5-2b49-4d25-998b-e11af5bff8c6),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 22:12:05 +0000 UTC,FinishedAt:2023-01-28 22:12:13 +0000 UTC,ContainerID:containerd://b8bee3deb5864048b1b587ef59dfb1c4aede245df9bf2e280cfb87b1c723e79f,}} Ready:false RestartCount:11 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://b8bee3deb5864048b1b587ef59dfb1c4aede245df9bf2e280cfb87b1c723e79f Started:0xc004b47b0f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 28 22:13:35.009: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 22:13:35.009: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 22:13:36.220: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:36.220: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:38.260: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:38.260: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:40.301: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:40.301: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:42.340: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:42.340: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:44.380: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:44.380: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:46.421: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:46.421: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:48.461: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:48.461: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:50.502: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:50.502: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:52.541: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:52.541: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:54.581: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:54.581: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:56.621: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:56.621: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:13:58.661: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:13:58.661: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:00.702: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:00.702: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:02.742: INFO: Couldn't get node 
bootstrap-e2e-minion-group-gw8s Jan 28 22:14:02.742: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:04.782: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:04.782: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:06.822: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:06.822: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:08.862: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:08.862: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:10.902: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:10.902: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:12.942: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:12.942: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:14.982: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:14.982: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:17.022: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:17.022: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:19.063: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:19.063: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:21.102: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:21.102: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:23.142: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:23.142: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:25.183: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:25.183: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:27.223: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:27.223: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:29.263: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:29.266: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:31.303: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:31.306: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:33.344: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:33.348: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:35.388: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:35.391: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:37.428: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:37.431: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:39.467: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:39.471: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:41.508: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:41.510: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:43.549: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:43.550: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:45.589: INFO: Couldn't get node bootstrap-e2e-minion-group-rndd Jan 28 22:14:45.589: INFO: Couldn't get node bootstrap-e2e-minion-group-gw8s Jan 28 22:14:52.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 22:14:52.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:14:54.125: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:14:54.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:14:56.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:14:56.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:14:58.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:14:58.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:15:00.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:15:00.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:15:02.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:15:02.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-gw8s didn't reach desired Ready condition status (false) within 2m0s Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-rndd didn't reach desired Ready condition status (false) within 2m0s Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-gw8s failed reboot test. Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-jdvv failed reboot test. Jan 28 22:15:04.308: INFO: Node bootstrap-e2e-minion-group-rndd failed reboot test. 
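What failed is the Ready=false wait above: the drop-inbound command lands at 22:13:03, and the framework then polls each node's Ready condition for up to 2m0s, expecting it to turn false. For much of that window the poll cannot reach the API server at all (the "Couldn't get node" lines, 22:13:36 through 22:14:45), and once it can, Ready is still True, so the budget expires at 22:15:04. Roughly, in shell terms (a sketch only; the actual implementation is the Go code at test/e2e/cloud/gcp/reboot.go):

    #!/bin/sh
    # Approximation of the "condition Ready to be false" wait seen above.
    NODE=bootstrap-e2e-minion-group-gw8s
    deadline=$(( $(date +%s) + 120 ))      # the 2m0s budget
    while [ "$(date +%s)" -lt "$deadline" ]; do
        status=$(kubectl get node "$NODE" \
            -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null) \
            || echo "Couldn't get node $NODE"   # API server unreachable, as above
        if [ "$status" = "False" ]; then
            echo "node $NODE reached NotReady"
            exit 0
        fi
        sleep 2
    done
    echo "node $NODE didn't reach desired Ready condition status (false) within 2m0s" >&2
    exit 1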
Jan 28 22:15:04.308: INFO: Executing termination hook on nodes Jan 28 22:15:04.308: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s Jan 28 22:15:04.308: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22) Jan 28 22:15:20.250: INFO: ssh prow@34.105.20.128:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 22:15:20.250: INFO: ssh prow@34.105.20.128:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 22:13:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 22:15:20.250: INFO: ssh prow@34.105.20.128:22: stderr: "" Jan 28 22:15:20.250: INFO: ssh prow@34.105.20.128:22: exit code: 0 Jan 28 22:15:20.250: INFO: Getting external IP address for bootstrap-e2e-minion-group-jdvv Jan 28 22:15:20.250: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-jdvv(34.127.24.56:22) Jan 28 22:15:20.795: INFO: ssh prow@34.127.24.56:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 22:15:20.795: INFO: ssh prow@34.127.24.56:22: stdout: "" Jan 28 22:15:20.795: INFO: ssh prow@34.127.24.56:22: stderr: "cat: /tmp/drop-inbound.log: No such file or directory\n" Jan 28 22:15:20.795: INFO: ssh prow@34.127.24.56:22: exit code: 1 Jan 28 22:15:20.795: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-inbound.log: No such file or directory ) Jan 28 22:15:20.795: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd Jan 28 22:15:20.795: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22) Jan 28 22:15:21.310: INFO: ssh prow@34.145.37.78:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 22:15:21.310: INFO: ssh prow@34.145.37.78:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 22:13:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 22:15:21.310: INFO: ssh prow@34.145.37.78:22: stderr: "" Jan 28 22:15:21.310: INFO: ssh prow@34.145.37.78:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:15:21.31 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 22:15:21.31 (2m18.762s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:15:21.31 STEP: Collecting events from namespace "kube-system". 
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-77sdd to bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.010007141s (1.010017589s including waiting)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-77sdd
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-77sdd
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-77sdd_kube-system(db0c09f1-c4d8-4e56-ab71-b0803b234d20)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-77sdd
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: Get "http://10.64.2.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Liveness probe failed: Get "http://10.64.2.14:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Container coredns failed liveness probe, will be restarted
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Failed: Error: failed to get sandbox container task: no running task found: task 0b75b4d5d974b9f432b7e10e9d71af104dc8c2ddc0133e5a5cc1e268788ff5fc not found: not found
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-77sdd_kube-system(db0c09f1-c4d8-4e56-ab71-b0803b234d20)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-8xrbf to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.107628334s (2.107641232s including waiting)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": dial tcp 10.64.3.15:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-8xrbf_kube-system(f16a4d9b-c0c6-4f1c-94d6-b9a2f091b21e)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.20:8181/ready": dial tcp 10.64.3.20:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
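The FailedScheduling entries above show the scheduler refusing placement while nodes carry the node.kubernetes.io/not-ready taint. A quick way to see those taints directly, assuming the same kubectl access as before (sketch):

# List each node's taint keys; node.kubernetes.io/not-ready is the taint the
# scheduler reports as untolerated in the events above.
kubectl get nodes -o custom-columns='NODE:.metadata.name,TAINTS:.spec.taints[*].key'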
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container coredns
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-8xrbf_kube-system(f16a4d9b-c0c6-4f1c-94d6-b9a2f091b21e)
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.31:8181/ready": dial tcp 10.64.3.31:8181: connect: connection refused
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-8xrbf
Jan 28 22:15:21.403: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-77sdd
Jan 28 22:15:21.403: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 28 22:15:21.403: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 28 22:15:21.403: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 28 22:15:21.403: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 28 22:15:21.403: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3a39 became leader
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_5712c became leader
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_da42f became leader
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ac498 became leader
Jan 28 22:15:21.403: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3983d became leader
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-npfvc to bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 620.414125ms (620.448513ms including waiting)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-npfvc_kube-system(cd16d88d-4ef4-4c9a-96df-86fb4c70ef13)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-npfvc_kube-system(cd16d88d-4ef4-4c9a-96df-86fb4c70ef13)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-t5bmd to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.384242476s (1.38425164s including waiting)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-t5bmd_kube-system(07681149-8b9c-4c0d-bb8b-75eaf2c0c570)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-t5bmd_kube-system(07681149-8b9c-4c0d-bb8b-75eaf2c0c570)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Liveness probe failed: Get "http://10.64.3.30:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-twq5s to bootstrap-e2e-minion-group-rndd
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 602.431484ms (602.449236ms including waiting)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-twq5s_kube-system(de9ecb8f-d586-41fd-a04d-41f45f7ea0bf)
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container konnectivity-agent
Jan 28 22:15:21.403: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-t5bmd
Jan 28 22:15:21.403: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-twq5s
Jan 28 22:15:21.403: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-npfvc
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 28 22:15:21.403: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 28 22:15:21.403: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 28 22:15:21.403: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 28 22:15:21.403: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 28 22:15:21.403: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
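The kube-apiserver probe failures above record the exact health endpoints the kubelet was hitting. Reproducing them by hand from the master is just two curls (a sketch, assuming shell access to the master; -k because the probes target the serving cert on 127.0.0.1):

# Same URLs as the failed readiness/liveness probes logged above; a
# "connection refused" here matches the dial errors in the events, i.e.
# nothing listening on 443 while the apiserver container was down.
curl -sk "https://127.0.0.1:443/readyz"
curl -sk "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1"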
Jan 28 22:15:21.403: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_06832965-97e5-41e3-bec8-383d2f8deac1 became leader
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_58fabdfe-fbc6-41f6-b7ec-99d45b3aed32 became leader
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ade39950-f49a-4ad2-9cde-b63a206669f4 became leader
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_d447a835-496d-4a2e-83eb-b73c743e1937 became leader
Jan 28 22:15:21.403: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_d16b72c0-55fe-496a-92f4-0ae27f545a10 became leader
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-rtgpq to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.593488011s (1.5934971s including waiting)
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container autoscaler
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container autoscaler
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container autoscaler
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-rtgpq_kube-system(5025b848-8fb4-4098-be54-5a4a0668cef3)
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985-rtgpq: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-rtgpq
Jan 28 22:15:21.403: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-gw8s_kube-system(08a4c2012f0274262c3fc9dc7f0563a7)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-gw8s: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-gw8s_kube-system(08a4c2012f0274262c3fc9dc7f0563a7)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-jdvv_kube-system(e126030fe08b481bd93bca8e2433b514)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jdvv: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-jdvv_kube-system(e126030fe08b481bd93bca8e2433b514)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-rndd_kube-system(d885339590022f41c030a27cba4cc12d)
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container kube-proxy
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
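The recurring DNSConfigForming warnings here and for the pods above mean the node's resolv.conf lists more nameservers than the kubelet will apply; it keeps the first three, which is why the applied line is exactly "1.1.1.1 8.8.8.8 1.0.0.1". A one-line check on an affected node (sketch):

# More than 3 nameserver entries in resolv.conf triggers DNSConfigForming.
grep -c '^nameserver' /etc/resolv.conf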
Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container kube-proxy Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container kube-proxy Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container kube-proxy Jan 28 22:15:21.403: INFO: event for kube-proxy-bootstrap-e2e-minion-group-rndd: {kubelet bootstrap-e2e-minion-group-rndd} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-rndd_kube-system(d885339590022f41c030a27cba4cc12d) Jan 28 22:15:21.403: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 28 22:15:21.403: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 22:15:21.403: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 22:15:21.403: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_dbab41bb-932f-4b7b-a4b6-50c01fb48deb became leader Jan 28 22:15:21.403: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fd78c921-38c9-42ac-b405-1d075f95bdbf became leader Jan 28 22:15:21.403: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_2cf81737-e3b5-4d96-ad71-51da341fe4b2 became leader Jan 28 22:15:21.403: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a84201d0-0d8a-4734-b5b0-ec434b4709ca became leader Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-stdz9 to bootstrap-e2e-minion-group-jdvv Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 717.987767ms (717.996689ms including waiting) Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container default-http-backend Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container default-http-backend Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {node-controller } NodeNotReady: Node is not ready Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container default-http-backend Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99-stdz9: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container default-http-backend Jan 28 22:15:21.403: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-stdz9 Jan 28 22:15:21.403: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 22:15:21.403: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://10.138.0.2:8086/healthz": dial tcp 10.138.0.2:8086: connect: connection refused Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8gbc7 to bootstrap-e2e-minion-group-rndd Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 732.758665ms (732.778804ms including waiting) Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metadata-proxy Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metadata-proxy Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.847867171s (1.847875747s including waiting) Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container prometheus-to-sd-exporter Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container prometheus-to-sd-exporter Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {node-controller } NodeNotReady: Node is not ready Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {node-controller } NodeNotReady: Node is not ready Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-8gbc7: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-k6tg5 to bootstrap-e2e-master
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 726.879318ms (726.885896ms including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.308701037s (2.308707341s including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-k6tg5: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-xkczn to bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 739.300498ms (739.324026ms including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.989961468s (1.98997004s including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xkczn: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-xp6b5 to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 766.016998ms (766.038857ms including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.049975938s (2.049992955s including waiting)
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container metadata-proxy
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1-xp6b5: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container prometheus-to-sd-exporter
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-k6tg5
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8gbc7
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-xkczn
Jan 28 22:15:21.403: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-xp6b5
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-bxdnp to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.40352739s (2.403537151s including waiting)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.980585574s (1.980593738s including waiting)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-6764bf875c-bxdnp_kube-system(87cfd581-c05b-43b9-84ca-d52a66620447)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c-bxdnp: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-6764bf875c-bxdnp_kube-system(87cfd581-c05b-43b9-84ca-d52a66620447)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-bxdnp
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-bxdnp
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-hqtr6 to bootstrap-e2e-minion-group-rndd
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.306501676s (1.306517306s including waiting)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 973.374731ms (973.390393ms including waiting)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container metrics-server-nanny
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Failed: Error: failed to get sandbox container task: no running task found: task 71079ec71b556a610f487b9fe75651f31afafb33c20ca14a5f922d3ed9aa5de2 not found: not found
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-hqtr6
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server
Jan 28 22:15:21.403: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server-nanny
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server-nanny
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.12:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.12:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Liveness probe failed: Get "https://10.64.1.12:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container metrics-server-nanny
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container metrics-server-nanny
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Readiness probe failed: Get "https://10.64.1.14:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9-hqtr6: {kubelet bootstrap-e2e-minion-group-rndd} Unhealthy: Liveness probe failed: Get "https://10.64.1.14:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-hqtr6
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 28 22:15:21.404: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
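[Editor's note] The metrics-server probe events above show three distinct failure modes: "connect: connection refused" (nothing listening at the pod IP), "Client.Timeout exceeded while awaiting headers" (packets silently dropped, which is exactly what this test's inbound-DROP window produces), and "HTTP probe failed with statuscode: 500" (the endpoint answered but reported unhealthy). A minimal sketch of an HTTP probe that surfaces the same three outcomes; the URL is taken from the events, the 1s timeout mirrors the kubelet's default timeoutSeconds, and this is not the kubelet's actual prober:

```go
// Classify a readiness-style HTTP probe the way the events above read:
// a transport error (refused vs. timeout) or an unhealthy status code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) {
	client := &http.Client{
		Timeout: 1 * time.Second, // kubelet probes default to timeoutSeconds: 1
		Transport: &http.Transport{
			// Kubelet HTTPS probes do not verify the serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// err text distinguishes "connect: connection refused" (RST received)
		// from "Client.Timeout exceeded while awaiting headers" (blackholed).
		fmt.Printf("probe failed: Get %q: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		fmt.Printf("probe failed: HTTP probe failed with statuscode: %d\n", resp.StatusCode)
		return
	}
	fmt.Println("probe succeeded")
}

func main() {
	probe("https://10.64.1.14:10250/readyz") // pod IP and path from the events
}
```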
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.411688467s (2.411696734s including waiting)
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(699caeb5-2b49-4d25-998b-e11af5bff8c6)
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container volume-snapshot-controller
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(699caeb5-2b49-4d25-998b-e11af5bff8c6)
Jan 28 22:15:21.404: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:15:21.404 (94ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:15:21.404
Jan 28 22:15:21.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:15:21.45 (47ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 22:15:21.45
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 22:15:21.45 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:15:21.45
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:15:21.451
STEP: Collecting events from namespace "reboot-6388". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 22:15:21.451
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/28/23 22:15:21.491
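[Editor's note] The teardown above waits up to 3m0s for all nodes to be Ready, and the node dumps that follow print each node's conditions. A minimal client-go sketch of that same check, under the assumption that KUBECONFIG points at the cluster: it lists every node, reports its Ready condition, and prints taints such as the node.kubernetes.io/not-ready taint behind the FailedScheduling events earlier in this log:

```go
// List nodes and report readiness plus taints, roughly what the framework's
// "Waiting up to 3m0s for all (but 0) nodes to be ready" step is checking.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG is set; the e2e framework loads its own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		ready := corev1.ConditionUnknown
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				ready = cond.Status
			}
		}
		fmt.Printf("%s Ready=%s taints=%v\n", node.Name, ready, node.Spec.Taints)
	}
}
```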
Jan 28 22:15:21.533: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 28 22:15:21.533: INFO:
Jan 28 22:15:21.579: INFO: Logging node info for node bootstrap-e2e-master
Jan 28 22:15:21.630: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 8b296ac8-60af-491a-9f1c-cf2d7db0caac 3090 0 2023-01-28 21:53:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 21:53:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 21:53:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 21:53:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 22:14:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-protobuf/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 21:53:38 +0000 UTC,LastTransitionTime:2023-01-28 21:53:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 22:14:52 +0000 UTC,LastTransitionTime:2023-01-28 21:53:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 22:14:52 +0000 UTC,LastTransitionTime:2023-01-28 21:53:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 22:14:52 +0000 UTC,LastTransitionTime:2023-01-28 21:53:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 22:14:52 +0000 UTC,LastTransitionTime:2023-01-28 21:53:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.109.193,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:07ffb543bedc149534b2440709dac943,SystemUUID:07ffb543-bedc-1495-34b2-440709dac943,BootID:c11957ee-9d65-4212-80d4-e60012976419,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 28 22:15:21.630: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 28 22:15:21.676: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 28 22:15:21.741: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container konnectivity-server-container ready: true, restart count 4
Jan 28 22:15:21.741: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container kube-apiserver ready: true, restart count 3
Jan 28 22:15:21.741: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 21:52:50 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container kube-addon-manager ready: true, restart count 4
Jan 28 22:15:21.741: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 21:52:50 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container l7-lb-controller ready: true, restart count 7
Jan 28 22:15:21.741: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container etcd-container ready: true, restart count 1
Jan 28 22:15:21.741: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container etcd-container ready: true, restart count 2
Jan 28 22:15:21.741: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container kube-controller-manager ready: false, restart count 6
Jan 28 22:15:21.741: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 21:52:31 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container kube-scheduler ready: true, restart count 4
Jan 28 22:15:21.741: INFO: metadata-proxy-v0.1-k6tg5 started at 2023-01-28 21:53:17 +0000 UTC (0+2 container statuses recorded)
Jan 28 22:15:21.741: INFO: Container metadata-proxy ready: true, restart count 0
Jan 28 22:15:21.741: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Jan 28 22:15:21.922: INFO: Latency metrics for node bootstrap-e2e-master
Jan 28 22:15:21.922: INFO: Logging node info for node bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:21.964: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-gw8s 8baa011e-ebc5-4ddb-b17f-5306514d9570 3051 0 2023-01-28 21:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-gw8s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 21:53:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 22:07:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 22:08:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 22:13:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 22:13:10 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-protobuf/us-west1-b/bootstrap-e2e-minion-group-gw8s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 22:13:05 +0000 UTC,LastTransitionTime:2023-01-28 22:08:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 21:53:38 +0000 UTC,LastTransitionTime:2023-01-28 21:53:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:10 +0000 UTC,LastTransitionTime:2023-01-28 22:08:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:10 +0000 UTC,LastTransitionTime:2023-01-28 22:08:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:10 +0000 UTC,LastTransitionTime:2023-01-28 22:08:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 22:13:10 +0000 UTC,LastTransitionTime:2023-01-28 22:08:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.20.128,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-gw8s.c.k8s-jkns-e2e-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-gw8s.c.k8s-jkns-e2e-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a1fe5d1cff5799c1dfa37127e8163c9f,SystemUUID:a1fe5d1c-ff57-99c1-dfa3-7127e8163c9f,BootID:e9a93669-a286-41e2-ada3-f1d44aab47fb,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 28 22:15:21.965: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:22.011: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:22.119: INFO: kube-proxy-bootstrap-e2e-minion-group-gw8s started at 2023-01-28 21:53:21 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.119: INFO: Container kube-proxy ready: true, restart count 6
Jan 28 22:15:22.119: INFO: metadata-proxy-v0.1-xkczn started at 2023-01-28 21:53:22 +0000 UTC (0+2 container statuses recorded)
Jan 28 22:15:22.119: INFO: Container metadata-proxy ready: true, restart count 2
Jan 28 22:15:22.119: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2
Jan 28 22:15:22.119: INFO: konnectivity-agent-npfvc started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.119: INFO: Container konnectivity-agent ready: false, restart count 6
Jan 28 22:15:22.119: INFO: coredns-6846b5b5f-77sdd started at 2023-01-28 21:53:44 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.119: INFO: Container coredns ready: true, restart count 5
Jan 28 22:15:22.286: INFO: Latency metrics for node bootstrap-e2e-minion-group-gw8s
Jan 28 22:15:22.286: INFO: Logging node info for node bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:22.330: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-jdvv b8bb8d18-5ed7-4907-be37-ee1dfaa09d07 3106 0 2023-01-28 21:53:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-jdvv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 21:53:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:57:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 22:05:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 22:10:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 22:14:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-protobuf/us-west1-b/bootstrap-e2e-minion-group-jdvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 22:14:58 +0000 UTC,LastTransitionTime:2023-01-28 22:04:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 21:53:38 +0000 UTC,LastTransitionTime:2023-01-28 21:53:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 22:10:36 +0000 UTC,LastTransitionTime:2023-01-28 21:58:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 22:10:36 +0000 UTC,LastTransitionTime:2023-01-28 21:58:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 22:10:36 +0000 UTC,LastTransitionTime:2023-01-28 21:58:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 22:10:36 +0000 UTC,LastTransitionTime:2023-01-28 22:05:30 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.24.56,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-jdvv.c.k8s-jkns-e2e-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-jdvv.c.k8s-jkns-e2e-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:513e46be1ccc0a1da13599e75884edf8,SystemUUID:513e46be-1ccc-0a1d-a135-99e75884edf8,BootID:10d1a8a6-5c8f-4588-900d-6a51d3348d1a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 28 22:15:22.330: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:22.376: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:22.453: INFO: kube-proxy-bootstrap-e2e-minion-group-jdvv started at 2023-01-28 21:53:22 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.453: INFO: Container kube-proxy ready: true, restart count 8
Jan 28 22:15:22.453: INFO: l7-default-backend-8549d69d99-stdz9 started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.453: INFO: Container default-http-backend ready: true, restart count 1
Jan 28 22:15:22.453: INFO: volume-snapshot-controller-0 started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.453: INFO: Container volume-snapshot-controller ready: false, restart count 11
Jan 28 22:15:22.453: INFO: kube-dns-autoscaler-5f6455f985-rtgpq started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.453: INFO: Container autoscaler ready: false, restart count 3
Jan 28 22:15:22.453: INFO: coredns-6846b5b5f-8xrbf started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.453: INFO: Container coredns ready: true, restart count 6
Jan 28 22:15:22.453: INFO: metadata-proxy-v0.1-xp6b5 started at 2023-01-28 21:53:23 +0000 UTC (0+2 container statuses recorded)
Jan 28 22:15:22.453: INFO: Container metadata-proxy ready: true, restart count 1
Jan 28 22:15:22.453: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1
Jan 28 22:15:22.453: INFO: konnectivity-agent-t5bmd started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded)
Jan 28 22:15:22.453: INFO: Container konnectivity-agent ready: false, restart count 7
Jan 28 22:15:22.628: INFO: Latency metrics for node bootstrap-e2e-minion-group-jdvv
Jan 28 22:15:22.628: INFO: Logging node info for node bootstrap-e2e-minion-group-rndd
Jan 28 22:15:22.671: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rndd 41be9bbb-23a1-4f74-99dc-2ef115465238 3054 0 2023-01-28 21:53:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rndd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 21:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 22:07:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 22:08:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-28 22:13:07 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 22:13:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-protobuf/us-west1-b/bootstrap-e2e-minion-group-rndd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 
22:08:05 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 22:13:07 +0000 UTC,LastTransitionTime:2023-01-28 22:08:05 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 21:53:38 +0000 UTC,LastTransitionTime:2023-01-28 21:53:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:12 +0000 UTC,LastTransitionTime:2023-01-28 22:08:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:12 +0000 UTC,LastTransitionTime:2023-01-28 22:08:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 22:13:12 +0000 UTC,LastTransitionTime:2023-01-28 22:08:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 22:13:12 +0000 UTC,LastTransitionTime:2023-01-28 22:08:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.145.37.78,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rndd.c.k8s-jkns-e2e-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rndd.c.k8s-jkns-e2e-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6859cc7a546c71de83c075bc57ce869e,SystemUUID:6859cc7a-546c-71de-83c0-75bc57ce869e,BootID:e868ec98-cb99-4706-a359-8636a9af2027,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 22:15:22.671: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rndd Jan 28 22:15:22.717: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rndd Jan 28 22:15:22.795: INFO: kube-proxy-bootstrap-e2e-minion-group-rndd started at 2023-01-28 21:53:20 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.795: INFO: Container kube-proxy ready: true, restart count 10 Jan 28 22:15:22.795: INFO: metadata-proxy-v0.1-8gbc7 started at 2023-01-28 21:53:21 +0000 UTC (0+2 container statuses recorded) Jan 28 22:15:22.795: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 22:15:22.795: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 22:15:22.795: INFO: konnectivity-agent-twq5s started at 2023-01-28 21:53:38 +0000 UTC (0+1 container statuses recorded) Jan 28 22:15:22.795: INFO: Container konnectivity-agent ready: true, restart count 7 Jan 28 22:15:22.795: INFO: metrics-server-v0.5.2-867b8754b9-hqtr6 started at 2023-01-28 21:54:18 +0000 UTC (0+2 container statuses recorded) Jan 28 22:15:22.795: INFO: Container metrics-server ready: false, restart count 9 Jan 28 22:15:22.795: INFO: Container metrics-server-nanny ready: false, restart count 9 Jan 28 22:15:22.961: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-rndd END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:15:22.961 (1.51s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:15:22.961 (1.511s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:15:22.961 STEP: Destroying namespace "reboot-6388" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 22:15:22.961 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:15:23.005 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:15:23.006 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:15:23.006 (0s)
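The node-condition and restart-count tables above come from the framework's failure dump. For ad-hoc triage outside the suite, the same information can be pulled with client-go; the following is a minimal sketch (the flag handling and output format are illustrative, not the framework's code):

    // nodeconditions.go - sketch: print the NodeCondition table for every
    // node, mirroring the "Logging node info for node ..." dump above.
    // Assumes a reachable cluster and a kubeconfig at the default path.
    package main

    import (
        "context"
        "flag"
        "fmt"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := flag.String("kubeconfig",
            filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
        flag.Parse()

        config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s\n", n.Name)
            for _, c := range n.Status.Conditions {
                // Same fields as the NodeCondition entries in the dump above.
                fmt.Printf("  %-28s %-6s %s (last transition %s)\n",
                    c.Type, c.Status, c.Reason, c.LastTransitionTime)
            }
        }
    }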
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 22:00:12.714 There were additional failures detected after the initial failure. These are visible in the timeline. (from ginkgo_report.xml and junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:59:42.593 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:59:42.593 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:59:42.593 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:59:42.593 Jan 28 21:59:42.593: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:59:42.595 Jan 28 21:59:42.634: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:44.674: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:46.674: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:48.675: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:50.676: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:52.674: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:54.674: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:56.674: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:58.674: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:00.675: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:02.675: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:04.675: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:06.675: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:08.675: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:10.676: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:12.675: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:12.714: INFO: 
Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:00:12.714: INFO: Unexpected error: <*errors.errorString | 0xc000207c90>: { s: "timed out waiting for the condition", } [FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 22:00:12.714 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:00:12.714 (30.121s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:00:12.714 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 22:00:12.714 Jan 28 22:00:12.754: INFO: Unexpected error: <*url.Error | 0xc00373a000>: { Op: "Get", URL: "https://35.230.109.193/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc003da6000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0035a1e90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000ec4020>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.230.109.193/api/v1/namespaces/kube-system/events": dial tcp 35.230.109.193:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 22:00:12.754 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:00:12.754 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:00:12.754 Jan 28 22:00:12.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:00:12.793 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:00:12.793 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:00:12.793 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:00:12.793 (0s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:00:12.793 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:00:12.793 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:00:12.793 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:00:12.793 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:00:12.793 (0s)
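The retries above run on a fixed cadence: the framework polls namespace creation roughly every 2s and gives up after 30s (21:59:42.593 to 22:00:12.714, reported as 30.121s), surfacing the wait package's generic error "timed out waiting for the condition". A minimal sketch of that polling pattern, using the apimachinery wait package (this approximates, but is not, the code at test/e2e/framework/framework.go:251):

    // namespace_retry.go - approximation of the polling loop behind the
    // failure above: ~2s interval, 30s budget, transient errors swallowed.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func createTestNamespace(client kubernetes.Interface, basename string) (*v1.Namespace, error) {
        var got *v1.Namespace
        err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
            ns, err := client.CoreV1().Namespaces().Create(context.TODO(),
                &v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: basename + "-"}},
                metav1.CreateOptions{})
            if err != nil {
                // Matches the log lines above: record the error and keep
                // polling; "connection refused" is treated as transient.
                fmt.Printf("Unexpected error while creating namespace: %v\n", err)
                return false, nil
            }
            got = ns
            return true, nil
        })
        // On exhaustion err is wait.ErrWaitTimeout, whose message is exactly
        // "timed out waiting for the condition".
        return got, err
    }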
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 22:12:05.192 There were additional failures detected after the initial failure. These are visible in the timeline. (from ginkgo_report.xml and junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:11:35.07 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:11:35.07 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:11:35.07 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 22:11:35.071 Jan 28 22:11:35.071: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 22:11:35.073 Jan 28 22:11:35.112: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:37.153: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:39.152: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:41.152: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:43.155: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:45.154: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:47.152: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:49.153: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:51.152: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:53.152: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:55.153: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:57.155: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:11:59.152: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:01.152: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:03.152: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:05.153: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:05.192: INFO: 
Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 22:12:05.192: INFO: Unexpected error: <*errors.errorString | 0xc000207c90>: { s: "timed out waiting for the condition", } [FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 22:12:05.192 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:12:05.192 (30.122s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:12:05.192 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 22:12:05.192 Jan 28 22:12:05.232: INFO: Unexpected error: <*url.Error | 0xc00373a7b0>: { Op: "Get", URL: "https://35.230.109.193/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc00196a140>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0058611d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0017a01c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.230.109.193/api/v1/namespaces/kube-system/events": dial tcp 35.230.109.193:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 22:12:05.232 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:12:05.232 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:12:05.232 Jan 28 22:12:05.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:12:05.271 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:12:05.271 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:12:05.271 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:12:05.271 (0s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:12:05.271 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:12:05.271 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:12:05.271 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:12:05.271 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:12:05.271 (0s)
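Every failure in this block reduces to the same symptom: TCP connections to 35.230.109.193:443 are refused, meaning nothing is listening on the master's apiserver port while it restarts, so the client never gets as far as authentication or the API itself. A standalone probe for that symptom (the address is taken from the logs above; the probe itself is illustrative):

    // apiserver_probe.go - distinguish "endpoint down" (connection refused)
    // from auth/API errors by dialing the raw TCP port a few times.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "35.230.109.193:443" // apiserver address from the failure logs
        for i := 0; i < 5; i++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s unreachable: %v\n", addr, err)
            } else {
                fmt.Printf("%s reachable\n", addr)
                conn.Close()
            }
            time.Sleep(2 * time.Second)
        }
    }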
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:11:34.9 There were additional failures detected after the initial failure. These are visible in the timeline. (from ginkgo_report.xml)
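In the log below, the "unclean reboot" is induced by enabling the kernel's sysrq interface and writing "b" to /proc/sysrq-trigger, which reboots the machine immediately without syncing or unmounting filesystems; the command is backgrounded with nohup so the SSH session can return before the VM dies. A sketch of issuing that command with golang.org/x/crypto/ssh (the remote command string is verbatim from the log; the helper name and client setup are illustrative, not the suite's code):

    // unclean_reboot.go - sketch: trigger a sysrq crash-reboot over SSH,
    // assuming key-based access as the "prow" user seen in the log.
    package sketch

    import "golang.org/x/crypto/ssh"

    // Exact remote command from the log below: enable sysrq, wait 10s,
    // then force an immediate, unclean reboot ("echo b").
    const rebootCmd = `nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &`

    func uncleanReboot(host string, key ssh.Signer) error {
        cfg := &ssh.ClientConfig{
            User:            "prow", // SSH user seen in the log ("ssh prow@...")
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(key)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", host+":22", cfg)
        if err != nil {
            return err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        // Returns as soon as the nohup'd command is backgrounded; the node
        // reboots ~10s later, after the SSH session has already exited.
        return session.Run(rebootCmd)
    }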
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:06:44.239 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:06:44.239 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:06:44.239 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 22:06:44.239 Jan 28 22:06:44.239: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 22:06:44.24 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 22:06:44.369 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 22:06:44.45 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:06:44.533 (294ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:06:44.533 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:06:44.533 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/28/23 22:06:44.533 Jan 28 22:06:44.680: INFO: Getting bootstrap-e2e-minion-group-gw8s Jan 28 22:06:44.680: INFO: Getting bootstrap-e2e-minion-group-rndd Jan 28 22:06:44.680: INFO: Getting bootstrap-e2e-minion-group-jdvv Jan 28 22:06:44.726: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jdvv condition Ready to be true Jan 28 22:06:44.726: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-rndd condition Ready to be true Jan 28 22:06:44.726: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true Jan 28 22:06:44.772: INFO: Node bootstrap-e2e-minion-group-rndd has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Node bootstrap-e2e-minion-group-jdvv has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and 
ready, or succeeded" Jan 28 22:06:44.772: INFO: Node bootstrap-e2e-minion-group-gw8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.817: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 45.623519ms Jan 28 22:06:44.818: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.818: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 45.84245ms Jan 28 22:06:44.818: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.818: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:06:44.818: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd Jan 28 22:06:44.818: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22) Jan 28 22:06:44.820: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 48.461067ms Jan 28 22:06:44.820: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:44.821: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 48.714667ms Jan 28 22:06:44.821: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 49.164952ms Jan 28 22:06:44.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 49.293961ms Jan 28 22:06:44.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.822: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 49.419454ms Jan 28 22:06:44.822: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.822: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. 
Elapsed: 49.642505ms Jan 28 22:06:44.822: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.822: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:06:44.822: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s Jan 28 22:06:44.822: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22) Jan 28 22:06:45.345: INFO: ssh prow@34.145.37.78:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 22:06:45.345: INFO: ssh prow@34.145.37.78:22: stdout: "" Jan 28 22:06:45.345: INFO: ssh prow@34.145.37.78:22: stderr: "" Jan 28 22:06:45.345: INFO: ssh prow@34.145.37.78:22: exit code: 0 Jan 28 22:06:45.345: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be false Jan 28 22:06:45.358: INFO: ssh prow@34.105.20.128:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 22:06:45.358: INFO: ssh prow@34.105.20.128:22: stdout: "" Jan 28 22:06:45.358: INFO: ssh prow@34.105.20.128:22: stderr: "" Jan 28 22:06:45.358: INFO: ssh prow@34.105.20.128:22: exit code: 0 Jan 28 22:06:45.358: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be false Jan 28 22:06:45.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:45.400: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:46.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090751852s Jan 28 22:06:46.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:47.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:47.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:48.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091573882s Jan 28 22:06:48.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:49.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:49.488: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 22:06:50.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091810249s Jan 28 22:06:50.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:51.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:51.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:52.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09092878s Jan 28 22:06:52.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:53.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:53.574: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:54.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090092486s Jan 28 22:06:54.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:55.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:55.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:56.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.091920646s Jan 28 22:06:56.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:57.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:57.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:58.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.090904491s Jan 28 22:06:58.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:59.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 28 22:06:59.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:00.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.090075733s
Jan 28 22:07:00.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:01.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:01.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:02.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.09005325s
Jan 28 22:07:02.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:03.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:03.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:04.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.090293565s
Jan 28 22:07:04.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:05.828: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:05.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:06.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 22.090626506s
Jan 28 22:07:06.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:07.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:07.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:08.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 24.091070777s
Jan 28 22:07:08.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:09.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:09.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:10.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.090762624s
Jan 28 22:07:10.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:11.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:11.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:12.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 28.090673098s
Jan 28 22:07:12.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:13.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:14.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:14.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.090920936s
Jan 28 22:07:14.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:16.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:16.047: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:16.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 32.09114652s
Jan 28 22:07:16.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:18.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:18.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:18.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.090006281s
Jan 28 22:07:18.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:20.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:20.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:20.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.090861788s
Jan 28 22:07:20.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:22.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:22.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:22.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 38.090462149s
Jan 28 22:07:22.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:24.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:24.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:24.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 40.09151224s
Jan 28 22:07:24.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:26.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:26.259: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:26.866: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 42.093724063s
Jan 28 22:07:26.866: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:28.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
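The Pod/Error pairs above come from a 2-second poll that keeps re-evaluating a single "running and ready, or succeeded" condition until a timeout. A minimal client-go sketch of that loop, under illustrative names (this is not the e2e framework's actual helper; the later sketches in this log assume this same package and import set):

```go
package rebootsketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podRunningReadyOrSucceeded mirrors the condition being logged above:
// Succeeded passes, Running passes only with Ready=True, and anything else
// (here: Pending) is reported and retried on the next tick.
func podRunningReadyOrSucceeded(pod *v1.Pod) bool {
	switch pod.Status.Phase {
	case v1.PodSucceeded:
		return true
	case v1.PodRunning:
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
				return true
			}
		}
		return false // Running but not Ready yet ("didn't have condition {Ready True}")
	default:
		fmt.Printf("want pod '%s' on '%s' to be 'Running' but was '%s'\n",
			pod.Name, pod.Spec.NodeName, pod.Status.Phase)
		return false
	}
}

// waitForPod polls every 2s, like the entries above, until timeout.
func waitForPod(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient here; try again
		}
		return podRunningReadyOrSucceeded(pod), nil
	})
}
```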
Jan 28 22:07:28.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:28.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 44.091547048s
Jan 28 22:07:28.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:30.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:30.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:30.879: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 46.106752449s
Jan 28 22:07:30.879: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:32.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:32.388: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true
Jan 28 22:07:32.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 22:07:32.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 48.090241552s
Jan 28 22:07:32.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:34.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:07:34.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 22:07:34.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 50.090824722s
Jan 28 22:07:34.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:36.472: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be true
Jan 28 22:07:36.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 22:07:36.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:36.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 52.090228889s
Jan 28 22:07:36.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:38.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 22:07:38.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:38.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 54.090253951s
Jan 28 22:07:38.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:40.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 22:07:40.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:40.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 56.09005209s
Jan 28 22:07:40.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:42.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 22:07:42.647: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:42.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 58.090600395s
Jan 28 22:07:42.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:44.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 22:07:44.689: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:44.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.090623968s
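Node readiness is polled the same way, in the two phases visible above: first the test waits 20s for Ready to go false as the disruption bites (the "is true instead of false" lines), then from 22:07:32 it waits up to 5m0s for Ready to come back true. A sketch of the per-node condition check, same illustrative assumptions and imports as the first block:

```go
// nodeReadyIs reports whether the node's Ready condition matches want,
// logging the mismatch lines seen above ("is true instead of false", and
// later "is false instead of true. Reason: NodeStatusUnknown, ...").
func nodeReadyIs(node *v1.Node, want bool) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type != v1.NodeReady {
			continue
		}
		got := cond.Status == v1.ConditionTrue
		if got != want {
			fmt.Printf("Condition Ready of node %s is %t instead of %t. Reason: %s, message: %s\n",
				node.Name, got, want, cond.Reason, cond.Message)
		}
		return got == want
	}
	return false // node reports no Ready condition at all
}
```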
Jan 28 22:07:44.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:46.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:07:46.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:46.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.090982358s
Jan 28 22:07:46.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:48.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:07:48.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:48.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.091869781s
Jan 28 22:07:48.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:50.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:07:50.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:50.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.091602406s
Jan 28 22:07:50.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:52.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:07:52.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:52.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.090058374s
Jan 28 22:07:52.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:54.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.090947182s
Jan 28 22:07:54.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:54.907: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:07:54.907: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:56.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.091209601s
Jan 28 22:07:56.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:56.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:07:56.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:07:58.867: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.09479064s
Jan 28 22:07:58.867: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:07:58.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:07:58.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:08:00.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.090398882s
Jan 28 22:08:00.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:01.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:08:01.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:08:02.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.09139885s
Jan 28 22:08:02.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:03.090: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:08:03.090: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:08:04.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.090223521s
Jan 28 22:08:04.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:05.136: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:08:05.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure
Jan 28 22:08:06.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.090822852s
Jan 28 22:08:06.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:07.184: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
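From 22:07:36 the check also fails nodes that are merely tainted: kubelet can already be posting status again (gw8s flips back to Ready=true at 22:08:05.137), but the node only counts as recovered once the NodeController has removed its unreachable taints. A sketch of that extra gate, same assumptions as above:

```go
// hasUnreachableTaint mirrors the "but Node is tainted by NodeController"
// failures above: recovery is not complete while any
// node.kubernetes.io/unreachable taint remains on the node.
func hasUnreachableTaint(node *v1.Node) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" {
			// Both effects show up in the log: NoSchedule first,
			// NoExecute roughly five seconds later.
			return true
		}
	}
	return false
}
```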
Jan 28 22:08:07.184: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 22:08:07.184: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:08:07.184: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:08:07.228: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=false. Elapsed: 44.079022ms
Jan 28 22:08:07.228: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=false. Elapsed: 44.135225ms
Jan 28 22:08:07.229: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-gw8s' on 'bootstrap-e2e-minion-group-gw8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:07:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC }]
Jan 28 22:08:07.229: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xkczn' on 'bootstrap-e2e-minion-group-gw8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:07:30 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:08:05 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }]
Jan 28 22:08:08.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.090955183s
Jan 28 22:08:08.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:09.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure
Jan 28 22:08:09.273: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 2.088605948s
Jan 28 22:08:09.273: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded"
Jan 28 22:08:09.273: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 2.088695723s
Jan 28 22:08:09.273: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded"
Jan 28 22:08:09.273: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 22:08:09.273: INFO: Reboot successful on node bootstrap-e2e-minion-group-gw8s
Jan 28 22:08:10.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.091048629s
Jan 28 22:08:10.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:11.287: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:08:11.287: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:08:11.287: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:08:11.356: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 69.172112ms
Jan 28 22:08:11.356: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded"
Jan 28 22:08:11.359: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 71.607798ms
Jan 28 22:08:11.359: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded"
Jan 28 22:08:11.359: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:08:11.359: INFO: Reboot successful on node bootstrap-e2e-minion-group-rndd
Jan 28 22:08:12.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.090230004s
Jan 28 22:08:12.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:14.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.090459281s
Jan 28 22:08:14.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:16.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.090601046s
Jan 28 22:08:16.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:18.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.091764758s
Jan 28 22:08:18.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:20.927: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.154863354s
Jan 28 22:08:20.927: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:22.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.090333786s
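Once a node clears the readiness-and-taint check, "Reboot successful" is still gated on that node's system pods, which is what the kube-proxy and metadata-proxy waits above are doing. A sequential sketch reusing waitForPod from the first block (the real test evaluates pods concurrently; the pod names here are the ones in the log):

```go
// nodeRecovered waits for each named kube-system pod on a rebooted node and
// prints summary lines shaped like the ones above.
func nodeRecovered(c kubernetes.Interface, node string, podNames []string, timeout time.Duration) bool {
	ok := true
	for _, name := range podNames {
		if err := waitForPod(c, "kube-system", name, timeout); err != nil {
			ok = false // timed out or aborted: node has not recovered
		} else {
			fmt.Printf("Pod %q satisfied condition \"running and ready, or succeeded\"\n", name)
		}
	}
	fmt.Printf("Wanted all %d pods to be running and ready, or succeeded. Result: %t. Pods: %v\n",
		len(podNames), ok, podNames)
	if ok {
		fmt.Printf("Reboot successful on node %s\n", node)
	}
	return ok
}
```

For example, the gw8s recovery above corresponds to nodeRecovered(c, "bootstrap-e2e-minion-group-gw8s", []string{"kube-proxy-bootstrap-e2e-minion-group-gw8s", "metadata-proxy-v0.1-xkczn"}, 5*time.Minute).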
Jan 28 22:08:22.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:24.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.090173088s
Jan 28 22:08:24.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:26.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.091102825s
Jan 28 22:08:26.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:28.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.091577347s
Jan 28 22:08:28.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:30.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.090929699s
Jan 28 22:08:30.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:32.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.090144593s
Jan 28 22:08:32.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:34.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.090721121s
Jan 28 22:08:34.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:36.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.091273099s
Jan 28 22:08:36.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:38.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.090308971s
Jan 28 22:08:38.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:40.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.090108472s
Jan 28 22:08:40.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:42.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.091938021s
Jan 28 22:08:42.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:44.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.091855296s
Jan 28 22:08:44.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:46.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.091031304s
Jan 28 22:08:46.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:48.865: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.092759724s
Jan 28 22:08:48.865: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:50.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.091385129s
Jan 28 22:08:50.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:52.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.09022031s
Jan 28 22:08:52.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:54.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.091645075s
Jan 28 22:08:54.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:56.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.091212038s
Jan 28 22:08:56.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:58.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.091792534s
Jan 28 22:08:58.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:00.865: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.093504231s
Jan 28 22:09:00.865: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:02.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.090792246s
Jan 28 22:09:02.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:04.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.091493688s
Jan 28 22:09:04.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:06.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.090865117s
Jan 28 22:09:06.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:08.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.090245129s
Jan 28 22:09:08.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:10.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.091314893s
Jan 28 22:09:10.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:12.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.090264652s
Jan 28 22:09:12.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:14.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.089965685s
Jan 28 22:09:14.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:16.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.091045362s
Jan 28 22:09:16.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:18.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.091146064s
Jan 28 22:09:18.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:20.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.09006365s
Jan 28 22:09:20.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:22.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.090142836s
Jan 28 22:09:22.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:24.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.090472755s
Jan 28 22:09:24.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:26.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.091114287s
Jan 28 22:09:26.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:28.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.090810238s
Jan 28 22:09:28.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:30.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.090219706s
Jan 28 22:09:30.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:32.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.091629637s
Jan 28 22:09:32.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:34.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.090184095s
Jan 28 22:09:34.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:36.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.090508735s
Jan 28 22:09:36.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:38.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.091875774s
Jan 28 22:09:38.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:09:40.869: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.096970094s
Jan 28 22:09:40.869: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:07.409: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.63714905s
Jan 28 22:10:07.409: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:08.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.0902141s
Jan 28 22:10:08.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:10.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.09159417s
Jan 28 22:10:10.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:12.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.091283831s
Jan 28 22:10:12.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:14.870: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.097688917s
Jan 28 22:10:14.870: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:16.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.090886758s
Jan 28 22:10:16.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:18.889: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.116658835s
Jan 28 22:10:18.889: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:20.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.090355695s
Jan 28 22:10:20.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:22.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.090009384s
Jan 28 22:10:22.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:24.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.09037183s
Jan 28 22:10:24.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:26.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.090502308s
Jan 28 22:10:26.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:28.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.090283686s
Jan 28 22:10:28.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:30.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.090898771s
Jan 28 22:10:30.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:32.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.090676492s
Jan 28 22:10:32.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:34.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.090547993s
Jan 28 22:10:34.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:36.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.091082745s
Jan 28 22:10:36.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:38.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.091804107s
Jan 28 22:10:38.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:40.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.090237359s
Jan 28 22:10:40.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:42.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.091250138s
Jan 28 22:10:42.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:44.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.090517753s
Jan 28 22:10:44.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:46.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.090415797s
Jan 28 22:10:46.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:48.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.090348543s
Jan 28 22:10:48.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:50.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.091900333s
Jan 28 22:10:50.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:52.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.091208759s
Jan 28 22:10:52.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:54.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.091989375s
Jan 28 22:10:54.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:56.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.090851427s
Jan 28 22:10:56.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:10:58.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.09041164s
Jan 28 22:10:58.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:00.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.090652128s
Jan 28 22:11:00.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:02.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.091847762s
Jan 28 22:11:02.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:04.906: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.133985519s
Jan 28 22:11:04.906: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:06.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.091155042s
Jan 28 22:11:06.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:08.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.090964718s
Jan 28 22:11:08.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:10.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.090237749s
Jan 28 22:11:10.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:12.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.090578745s
Jan 28 22:11:12.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:14.879: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.107440134s
Jan 28 22:11:14.879: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:16.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.090957755s
Jan 28 22:11:16.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:18.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.090424637s
Jan 28 22:11:18.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:20.880: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.108535778s
Jan 28 22:11:20.880: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:22.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.091518159s
Jan 28 22:11:22.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:24.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.092253845s
Jan 28 22:11:24.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:26.870: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.098323928s
Jan 28 22:11:26.870: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:28.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.090613627s
Jan 28 22:11:28.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:30.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.092235941s
Jan 28 22:11:30.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:32.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.090221708s
Jan 28 22:11:32.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:11:34.861: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-rtgpq: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:11:34.861: INFO: Pod kube-dns-autoscaler-5f6455f985-rtgpq failed to be running and ready, or succeeded.
Jan 28 22:11:34.861: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0]
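Note how the wait for kube-dns-autoscaler ends at 22:11:34: not by timing out, but because the Get itself fails with connection refused and is classified as non-retryable. A rough approximation of that classification is sketched below, assuming apierrors is k8s.io/apimachinery/pkg/api/errors; the framework's actual retry policy is more nuanced than this, so treat it as illustrative only:

```go
// getPodOrAbort treats well-known API status errors as retryable but gives
// up immediately on anything else, approximating the
// "Encountered non-retryable error ... connection refused" line above.
func getPodOrAbort(c kubernetes.Interface, ns, name string) (pod *v1.Pod, abort bool, err error) {
	pod, err = c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err == nil {
		return pod, false, nil
	}
	if apierrors.IsNotFound(err) || apierrors.IsServiceUnavailable(err) || apierrors.IsTooManyRequests(err) {
		return nil, false, err // retryable: poll again on the next tick
	}
	// Transport-level failures (e.g. dial tcp ... connection refused while
	// the apiserver is unreachable) abort the wait here.
	fmt.Printf("Encountered non-retryable error while getting pod %s/%s: %v\n", ns, name, err)
	return nil, true, err
}
```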
Pods: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0] Jan 28 22:11:34.861: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-rtgpq: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:05:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:05:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP: PodIPs:[] StartTime:2023-01-28 21:53:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:3 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://327aa9b55c426f26dbce218ae381d10dc0d1de28e736fd47f30215df0e91d6b7 Started:0xc00344f1da}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 22:11:34.900: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-rtgpq/autoscaler, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq/log?container=autoscaler&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 22:11:34.900: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-rtgpq/autoscaler, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq/log?container=autoscaler&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 22:11:34.900: INFO: Node bootstrap-e2e-minion-group-jdvv failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:11:34.9 < Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/28/23 22:11:34.901 (4m50.367s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:11:34.901 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 22:11:34.901 Jan 28 22:11:34.940: INFO: Unexpected error: <*url.Error | 0xc002676000>: { Op: "Get", URL: "https://35.230.109.193/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc003f8c1e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0027e2540>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00020e0a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.230.109.193/api/v1/namespaces/kube-system/events": dial tcp 35.230.109.193:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 22:11:34.94 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:11:34.94 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:11:34.94 Jan 28 22:11:34.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:11:34.98 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 22:11:34.98 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 22:11:34.98 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:11:34.98 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:11:34.98 STEP: Collecting events from namespace "reboot-342". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 22:11:34.98 Jan 28 22:11:35.020: INFO: Unexpected error: failed to list events in namespace "reboot-342": <*url.Error | 0xc0027e2570>: { Op: "Get", URL: "https://35.230.109.193/api/v1/namespaces/reboot-342/events", Err: <*net.OpError | 0xc004c81270>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00373a780>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001323200>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:11:35.02 (40ms) [FAILED] failed to list events in namespace "reboot-342": Get "https://35.230.109.193/api/v1/namespaces/reboot-342/events": dial tcp 35.230.109.193:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/28/23 22:11:35.02 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:11:35.02 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:11:35.02 STEP: Destroying namespace "reboot-342" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/28/23 22:11:35.021 [FAILED] Couldn't delete ns: "reboot-342": Delete "https://35.230.109.193/api/v1/namespaces/reboot-342": dial tcp 35.230.109.193:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.109.193/api/v1/namespaces/reboot-342", Err:(*net.OpError)(0xc003f8d3b0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 22:11:35.061 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:11:35.061 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:11:35.061 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:11:35.061 (0s)
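Every step after the initial failure above (event collection, namespace dump, namespace deletion) dies the same way: the nested error unwraps to <syscall.Errno>0x6f, which is errno 111, ECONNREFUSED on Linux. Nothing is listening on 35.230.109.193:443, i.e. the apiserver (or its whole control-plane VM) is down, so the cleanup failures are all symptoms of one outage. A minimal sketch, assuming only the Go standard library, of recognizing that condition; errors.Is walks the *url.Error -> *net.OpError -> *os.SyscallError chain shown in the dump, so no manual type assertions are needed:

    package main

    import (
        "errors"
        "fmt"
        "net/http"
        "syscall"
    )

    func main() {
        // Endpoint taken from the log above; any unreachable apiserver URL behaves the same.
        _, err := http.Get("https://35.230.109.193/api/v1/namespaces/kube-system/events")
        // errors.Is unwraps *url.Error -> *net.OpError -> *os.SyscallError -> syscall.Errno,
        // matching 0x6f (111, ECONNREFUSED) anywhere in the chain.
        if errors.Is(err, syscall.ECONNREFUSED) {
            fmt.Println("apiserver unreachable:", err)
        }
    }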
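The dominant pattern in the log above, and in the one that follows, is the pod wait: for every pod on the rebooted node that has no liveness probe, the suite polls the apiserver every 2s, for up to 5m0s, until the pod is "running and ready, or succeeded". A rough client-go sketch of that wait; the pod name and kubeconfig path are taken from the log, but the helper itself is illustrative, not the framework's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodRunningAndReady approximates the "running and ready, or succeeded"
    // condition the log keeps re-evaluating every 2s.
    func waitPodRunningAndReady(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat apiserver errors as transient and keep polling
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodRunning:
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
            }
            // Still Pending: the state kube-dns-autoscaler-5f6455f985-rtgpq never left.
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodRunningAndReady(cs, "kube-system", "kube-dns-autoscaler-5f6455f985-rtgpq"))
    }

In the run above this wait never succeeds: the autoscaler pod stays Pending for the full 4m50s, and the final poll hits connection refused, which the framework reports as the "non-retryable error" before failing the node.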
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
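The --ginkgo.focus value in this command is simply the spec's full name with every space, bracket, colon, and hyphen escaped and a trailing $ anchor; ginkgo interprets it as a regular expression matched against each spec's description. A quick check of that equivalence, with the name string reassembled from the command above:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Full spec name, as ginkgo concatenates the container and It descriptions.
        name := "Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] " +
            "each node by ordering unclean reboot and ensure they function upon restart"
        // The exact value passed via --ginkgo.focus above.
        focus := regexp.MustCompile(`Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$`)
        fmt.Println(focus.MatchString(name)) // true
    }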
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:11:34.9 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
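As the log below shows, the "unclean reboot" is induced over SSH: enable all magic-SysRq functions (echo 1 > /proc/sys/kernel/sysrq), sleep 10s so the SSH session can return cleanly (hence the nohup and trailing &), then write 'b' to /proc/sysrq-trigger, which reboots the machine immediately, without syncing or unmounting filesystems. A standalone sketch of issuing that command; the plain `ssh` invocation is a hypothetical stand-in for the framework's own SSH helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // uncleanReboot runs the same one-liner the suite sends to each node below.
    func uncleanReboot(user, host string) error {
        cmd := `nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &`
        out, err := exec.Command("ssh", user+"@"+host, cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh %s@%s: %v (output: %q)", user, host, err, out)
        }
        return nil // expect exit code 0 with empty stdout/stderr, as logged below
    }

    func main() {
        // External IP from the log below. After this returns, the test waits for
        // the node's Ready condition to go false (reboot started) and then true
        // again (node back), before re-checking the node's pods.
        if err := uncleanReboot("prow", "34.145.37.78"); err != nil {
            fmt.Println(err)
        }
    }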
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:06:44.239 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:06:44.239 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:06:44.239 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 22:06:44.239 Jan 28 22:06:44.239: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 22:06:44.24 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 22:06:44.369 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 22:06:44.45 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:06:44.533 (294ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:06:44.533 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:06:44.533 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/28/23 22:06:44.533 Jan 28 22:06:44.680: INFO: Getting bootstrap-e2e-minion-group-gw8s Jan 28 22:06:44.680: INFO: Getting bootstrap-e2e-minion-group-rndd Jan 28 22:06:44.680: INFO: Getting bootstrap-e2e-minion-group-jdvv Jan 28 22:06:44.726: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jdvv condition Ready to be true Jan 28 22:06:44.726: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-rndd condition Ready to be true Jan 28 22:06:44.726: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true Jan 28 22:06:44.772: INFO: Node bootstrap-e2e-minion-group-rndd has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Node bootstrap-e2e-minion-group-jdvv has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and 
ready, or succeeded" Jan 28 22:06:44.772: INFO: Node bootstrap-e2e-minion-group-gw8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.772: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:06:44.817: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 45.623519ms Jan 28 22:06:44.818: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.818: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 45.84245ms Jan 28 22:06:44.818: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.818: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:06:44.818: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd Jan 28 22:06:44.818: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22) Jan 28 22:06:44.820: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 48.461067ms Jan 28 22:06:44.820: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:44.821: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 48.714667ms Jan 28 22:06:44.821: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 49.164952ms Jan 28 22:06:44.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 49.293961ms Jan 28 22:06:44.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.822: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 49.419454ms Jan 28 22:06:44.822: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.822: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. 
Elapsed: 49.642505ms Jan 28 22:06:44.822: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded" Jan 28 22:06:44.822: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:06:44.822: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s Jan 28 22:06:44.822: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22) Jan 28 22:06:45.345: INFO: ssh prow@34.145.37.78:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 22:06:45.345: INFO: ssh prow@34.145.37.78:22: stdout: "" Jan 28 22:06:45.345: INFO: ssh prow@34.145.37.78:22: stderr: "" Jan 28 22:06:45.345: INFO: ssh prow@34.145.37.78:22: exit code: 0 Jan 28 22:06:45.345: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be false Jan 28 22:06:45.358: INFO: ssh prow@34.105.20.128:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 22:06:45.358: INFO: ssh prow@34.105.20.128:22: stdout: "" Jan 28 22:06:45.358: INFO: ssh prow@34.105.20.128:22: stderr: "" Jan 28 22:06:45.358: INFO: ssh prow@34.105.20.128:22: exit code: 0 Jan 28 22:06:45.358: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be false Jan 28 22:06:45.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:45.400: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:46.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090751852s Jan 28 22:06:46.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:47.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:47.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:48.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091573882s Jan 28 22:06:48.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:49.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:49.488: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 22:06:50.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091810249s Jan 28 22:06:50.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:51.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:51.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:52.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09092878s Jan 28 22:06:52.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:53.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:53.574: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:54.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090092486s Jan 28 22:06:54.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:55.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:55.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:56.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.091920646s Jan 28 22:06:56.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:57.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:57.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:06:58.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.090904491s Jan 28 22:06:58.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:06:59.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 22:06:59.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:00.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.090075733s Jan 28 22:07:00.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:01.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:01.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:02.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.09005325s Jan 28 22:07:02.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:03.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:03.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:04.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.090293565s Jan 28 22:07:04.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:05.828: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:05.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:06.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 22.090626506s Jan 28 22:07:06.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:07.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:07.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:08.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.091070777s Jan 28 22:07:08.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:09.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:09.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:10.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.090762624s Jan 28 22:07:10.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:11.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:11.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:12.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 28.090673098s Jan 28 22:07:12.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:13.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:14.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:14.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.090920936s Jan 28 22:07:14.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:16.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:16.047: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:16.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 32.09114652s Jan 28 22:07:16.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:18.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:18.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 22:07:18.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.090006281s Jan 28 22:07:18.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:20.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:20.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:20.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.090861788s Jan 28 22:07:20.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:22.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:22.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:22.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 38.090462149s Jan 28 22:07:22.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:24.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:24.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:24.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 40.09151224s Jan 28 22:07:24.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:26.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:26.259: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:26.866: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 42.093724063s Jan 28 22:07:26.866: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:28.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 22:07:28.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:28.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 44.091547048s Jan 28 22:07:28.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:30.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:30.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:30.879: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 46.106752449s Jan 28 22:07:30.879: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:32.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:32.388: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true Jan 28 22:07:32.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:07:32.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 48.090241552s Jan 28 22:07:32.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:34.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:07:34.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:07:34.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 50.090824722s Jan 28 22:07:34.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:36.472: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be true Jan 28 22:07:36.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:07:36.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:36.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.090228889s Jan 28 22:07:36.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:38.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:07:38.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:38.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 54.090253951s Jan 28 22:07:38.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:40.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:07:40.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:40.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 56.09005209s Jan 28 22:07:40.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:42.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:07:42.647: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:42.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 58.090600395s Jan 28 22:07:42.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:44.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:07:44.689: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:44.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m0.090623968s Jan 28 22:07:44.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:46.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:07:46.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:46.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.090982358s Jan 28 22:07:46.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:48.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:07:48.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:48.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.091869781s Jan 28 22:07:48.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:50.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:07:50.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:50.864: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.091602406s Jan 28 22:07:50.864: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:52.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. 
Failure Jan 28 22:07:52.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:52.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.090058374s Jan 28 22:07:52.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:54.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.090947182s Jan 28 22:07:54.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:54.907: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:07:54.907: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:56.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.091209601s Jan 28 22:07:56.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:56.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:07:56.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:07:58.867: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.09479064s Jan 28 22:07:58.867: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:07:58.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:07:58.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. 
Failure Jan 28 22:08:00.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.090398882s Jan 28 22:08:00.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:08:01.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:08:01.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:08:02.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.09139885s Jan 28 22:08:02.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:08:03.090: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:30 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:08:03.090: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:08:04.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.090223521s Jan 28 22:08:04.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:08:05.136: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:07:35 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:08:05.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:35 +0000 UTC}]. Failure Jan 28 22:08:06.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.090822852s Jan 28 22:08:06.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:08:07.184: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. 
Failure Jan 28 22:08:07.184: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:08:07.184: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:08:07.184: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:08:07.228: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=false. Elapsed: 44.079022ms Jan 28 22:08:07.228: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=false. Elapsed: 44.135225ms Jan 28 22:08:07.229: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-gw8s' on 'bootstrap-e2e-minion-group-gw8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:07:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC }] Jan 28 22:08:07.229: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xkczn' on 'bootstrap-e2e-minion-group-gw8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:07:30 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:08:05 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:08:08.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.090955183s Jan 28 22:08:08.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending' Jan 28 22:08:09.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:07:45 +0000 UTC}]. Failure Jan 28 22:08:09.273: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 2.088605948s Jan 28 22:08:09.273: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded" Jan 28 22:08:09.273: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 2.088695723s Jan 28 22:08:09.273: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded" Jan 28 22:08:09.273: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:08:09.273: INFO: Reboot successful on node bootstrap-e2e-minion-group-gw8s Jan 28 22:08:10.863: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m26.091048629s
Jan 28 22:08:10.863: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
Jan 28 22:08:11.287: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:08:11.287: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:08:11.287: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:08:11.356: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 69.172112ms
Jan 28 22:08:11.356: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded"
Jan 28 22:08:11.359: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 71.607798ms
Jan 28 22:08:11.359: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded"
Jan 28 22:08:11.359: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:08:11.359: INFO: Reboot successful on node bootstrap-e2e-minion-group-rndd
Jan 28 22:08:12.862: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.090230004s
Jan 28 22:08:12.862: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-rtgpq' on 'bootstrap-e2e-minion-group-jdvv' to be 'Running' but was 'Pending'
[... the same two-line Pending poll repeats roughly every 2s (with one 26s gap between 22:09:40 and 22:10:07), up to Elapsed: 4m48.090221708s at 22:11:32 ...]
Jan 28 22:11:34.861: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-rtgpq: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq": dial tcp 35.230.109.193:443: connect: connection refused
Jan 28 22:11:34.861: INFO: Pod kube-dns-autoscaler-5f6455f985-rtgpq failed to be running and ready, or succeeded.
Jan 28 22:11:34.861: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-rtgpq kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0]
Jan 28 22:11:34.861: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-rtgpq: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:05:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 22:05:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP: PodIPs:[] StartTime:2023-01-28 21:53:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:3 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://327aa9b55c426f26dbce218ae381d10dc0d1de28e736fd47f30215df0e91d6b7 Started:0xc00344f1da}] QOSClass:Burstable EphemeralContainerStatuses:[]}
Jan 28 22:11:34.900: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-rtgpq/autoscaler, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq/log?container=autoscaler&previous=false": dial tcp 35.230.109.193:443: connect: connection refused:
Jan 28 22:11:34.900: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-rtgpq/autoscaler, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-rtgpq/log?container=autoscaler&previous=false": dial tcp 35.230.109.193:443: connect: connection refused:
Jan 28 22:11:34.900: INFO: Node bootstrap-e2e-minion-group-jdvv failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:11:34.9
< Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/28/23 22:11:34.901 (4m50.367s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:11:34.901
STEP: Collecting events from namespace "kube-system".
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 22:11:34.901
Jan 28 22:11:34.940: INFO: Unexpected error:
    <*url.Error | 0xc002676000>: {
        Op: "Get",
        URL: "https://35.230.109.193/api/v1/namespaces/kube-system/events",
        Err: <*net.OpError | 0xc003f8c1e0>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc0027e2540>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc00020e0a0>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
[FAILED] Get "https://35.230.109.193/api/v1/namespaces/kube-system/events": dial tcp 35.230.109.193:443: connect: connection refused
In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 22:11:34.94
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:11:34.94 (40ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:11:34.94
Jan 28 22:11:34.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 22:11:34.98 (40ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 22:11:34.98
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 22:11:34.98 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:11:34.98
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:11:34.98
STEP: Collecting events from namespace "reboot-342". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 22:11:34.98
Jan 28 22:11:35.020: INFO: Unexpected error: failed to list events in namespace "reboot-342":
    <*url.Error | 0xc0027e2570>: {
        Op: "Get",
        URL: "https://35.230.109.193/api/v1/namespaces/reboot-342/events",
        Err: <*net.OpError | 0xc004c81270>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc00373a780>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc001323200>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 22:11:35.02 (40ms)
[FAILED] failed to list events in namespace "reboot-342": Get "https://35.230.109.193/api/v1/namespaces/reboot-342/events": dial tcp 35.230.109.193:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/28/23 22:11:35.02
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 22:11:35.02 (40ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:11:35.02
STEP: Destroying namespace "reboot-342" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 22:11:35.021
[FAILED] Couldn't delete ns: "reboot-342": Delete "https://35.230.109.193/api/v1/namespaces/reboot-342": dial tcp 35.230.109.193:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.109.193/api/v1/namespaces/reboot-342", Err:(*net.OpError)(0xc003f8d3b0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 22:11:35.061
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 22:11:35.061 (40ms)
> Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:11:35.061
< Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 22:11:35.061 (0s)
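For readers following the poll loop above: the framework checks each system pod every couple of seconds against the condition "running and ready, or succeeded" until a 5m0s deadline, and bails out early on a non-retryable API error, which is exactly what ends the loop at 22:11:34. Below is a minimal client-go sketch of that decision logic; the helper name and structure are illustrative, not the framework's actual code.

package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitRunningReadyOrSucceeded is a hypothetical stand-in for the framework's
// pod wait: poll the pod, accept Succeeded, or Running with the Ready
// condition true; otherwise keep polling until the timeout.
func waitRunningReadyOrSucceeded(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// A "connection refused" from the apiserver is treated as
			// non-retryable, which is what terminates the loop in the log.
			return fmt.Errorf("non-retryable error getting pod %s/%s: %w", ns, name, err)
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return nil
		case v1.PodRunning:
			for _, cond := range pod.Status.Conditions {
				if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Since(start) > timeout {
			return fmt.Errorf("pod %s/%s not running and ready, or succeeded, after %v", ns, name, timeout)
		}
		time.Sleep(2 * time.Second) // matches the ~2s cadence visible in the log
	}
}

Note how an apiserver outage surfaces here: the Get fails with connection refused, the loop gives up as non-retryable, and the node is reported as having failed the reboot test even though the pod itself may simply still be starting.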
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:59:42.411
There were additional failures detected after the initial failure. These are visible in the timeline. (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:56:27.945
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:56:27.945 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:56:27.945
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:56:27.946
Jan 28 21:56:27.946: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:56:27.947
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 21:56:28.077
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 21:56:28.158
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:56:28.24 (295ms)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:56:28.24
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:56:28.24 (0s)
> Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/28/23 21:56:28.24
Jan 28 21:56:28.336: INFO: Getting bootstrap-e2e-minion-group-rndd
Jan 28 21:56:28.378: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-rndd condition Ready to be true
Jan 28 21:56:28.387: INFO: Getting bootstrap-e2e-minion-group-gw8s
Jan 28 21:56:28.387: INFO: Getting bootstrap-e2e-minion-group-jdvv
Jan 28 21:56:28.420: INFO: Node bootstrap-e2e-minion-group-rndd has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 21:56:28.420: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 21:56:28.420: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 21:56:28.420: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 21:56:28.430: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true
Jan 28 21:56:28.430: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jdvv condition Ready to be true
Jan 28 21:56:28.463: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 42.83729ms
Jan 28 21:56:28.463: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 42.749912ms
Jan 28 21:56:28.463: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded"
Jan 28 21:56:28.463: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded"
Jan 28 21:56:28.463: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 21:56:28.463: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd
Jan 28 21:56:28.463: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22)
Jan 28 21:56:28.473: INFO: Node bootstrap-e2e-minion-group-gw8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 21:56:28.473: INFO: Node bootstrap-e2e-minion-group-jdvv has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq]
Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq]
Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 21:56:28.517: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 43.996436ms
Jan 28 21:56:28.517: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded"
Jan 28 21:56:28.518: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 45.757976ms
Jan 28 21:56:28.518: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded"
Jan 28 21:56:28.518: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 21:56:28.518: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s
Jan 28 21:56:28.518: INFO: SSH "nohup sh -c '[same eth0 down/up script as above]' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22)
Jan 28 21:56:28.520: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 46.554567ms
Jan 28 21:56:28.520: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 28 21:56:28.520: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Running", Reason="", readiness=true. Elapsed: 47.328453ms
Jan 28 21:56:28.520: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq" satisfied condition "running and ready, or succeeded"
Jan 28 21:56:28.521: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 47.842257ms
Jan 28 21:56:28.521: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded"
Jan 28 21:56:28.521: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 48.00027ms
Jan 28 21:56:28.521: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded"
Jan 28 21:56:28.521: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq]
Jan 28 21:56:28.521: INFO: Getting external IP address for bootstrap-e2e-minion-group-jdvv
Jan 28 21:56:28.521: INFO: SSH "nohup sh -c '[same eth0 down/up script as above]' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-jdvv(34.127.24.56:22)
Jan 28 21:56:28.986: INFO: ssh prow@34.145.37.78:22: command: nohup sh -c '[same eth0 down/up script as above]' >/dev/null 2>&1 &
Jan 28 21:56:28.986: INFO: ssh prow@34.145.37.78:22: stdout: ""
Jan 28 21:56:28.986: INFO: ssh prow@34.145.37.78:22: stderr: ""
Jan 28 21:56:28.986: INFO: ssh prow@34.145.37.78:22: exit code: 0
Jan 28 21:56:28.986: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be false
Jan 28 21:56:29.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
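The single-line SSH payload above is hard to read. Laid out below as a Go raw-string constant, mirroring how the test ships it to each node over SSH; the line breaks and comments are added here for readability, the command text itself is taken from the log.

package sketch

// networkToggleCmd takes eth0 down for two minutes and then tries to bring
// networking back. Each step tees a breadcrumb into /dev/kmsg so the sequence
// shows up in the node's kernel/serial log even while the network is down.
const networkToggleCmd = `nohup sh -c '
  sleep 10;                                         # let the SSH session return first
  echo Shutting down eth0 | sudo tee /dev/kmsg;
  sudo ip link set eth0 down | sudo tee /dev/kmsg;  # cut the NIC
  sleep 120;                                        # keep it down for 2 minutes
  echo Starting up eth0 | sudo tee /dev/kmsg;
  sudo ip link set eth0 up | sudo tee /dev/kmsg;
  sleep 10;
  echo Retrying starting up eth0 | sudo tee /dev/kmsg;
  sudo ip link set eth0 up | sudo tee /dev/kmsg;    # second attempt, in case the first raced
  echo Running dhclient | sudo tee /dev/kmsg;
  sudo dhclient | sudo tee /dev/kmsg;               # re-acquire the DHCP lease
  echo Starting systemd-networkd | sudo tee /dev/kmsg;
  sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg
' >/dev/null 2>&1 &`

The nohup + background + redirected-output wrapper is what lets the SSH call return immediately with exit code 0, as seen in the log, while the outage plays out on the node afterwards.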
AppArmor enabled Jan 28 21:56:29.045: INFO: ssh prow@34.127.24.56:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 21:56:29.045: INFO: ssh prow@34.127.24.56:22: stdout: "" Jan 28 21:56:29.045: INFO: ssh prow@34.127.24.56:22: stderr: "" Jan 28 21:56:29.045: INFO: ssh prow@34.127.24.56:22: exit code: 0 Jan 28 21:56:29.045: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-jdvv condition Ready to be false Jan 28 21:56:29.045: INFO: ssh prow@34.105.20.128:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 21:56:29.045: INFO: ssh prow@34.105.20.128:22: stdout: "" Jan 28 21:56:29.045: INFO: ssh prow@34.105.20.128:22: stderr: "" Jan 28 21:56:29.045: INFO: ssh prow@34.105.20.128:22: exit code: 0 Jan 28 21:56:29.045: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be false Jan 28 21:56:29.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:29.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:31.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:31.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:31.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:33.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:33.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:33.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:35.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:56:35.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:35.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:37.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:37.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:37.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:39.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:39.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:39.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:41.291: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:41.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:41.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:43.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:43.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:43.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:45.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:45.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:45.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:47.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:47.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:47.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:49.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:49.537: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:49.537: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:51.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:51.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:51.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:53.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:53.625: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:53.625: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:55.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:55.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:55.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:57.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:57.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:57.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:59.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:59.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:56:59.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:01.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:01.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:01.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:03.769: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:03.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:03.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:05.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:05.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:05.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:07.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:08.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:08.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:09.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:10.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:10.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:12.041: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:12.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:12.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:14.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:14.187: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:14.187: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:16.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:16.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:16.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:18.173: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be true Jan 28 21:57:18.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:18.276: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true Jan 28 21:57:18.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:18.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:20.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:20.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:20.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:22.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:22.364: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-jdvv condition Ready to be true Jan 28 21:57:22.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:22.406: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:24.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
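The INFO lines above come from the test's node-condition poller: it first waits for each node's Ready condition to become false (the packet drop taking effect), then waits up to 5m0s for it to become true again, re-fetching the Node and logging the mismatch on every round. A minimal sketch of that polling pattern, using client-go directly rather than the e2e framework's own helper (the function name, kubeconfig handling, and 2s cadence below are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's Ready condition equals wantTrue,
// logging each mismatch the way the INFO lines above do.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, node string, wantTrue bool, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			n, err := c.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, cond := range n.Status.Conditions {
				if cond.Type != v1.NodeReady {
					continue
				}
				got := cond.Status == v1.ConditionTrue
				if got == wantTrue {
					return true, nil
				}
				fmt.Printf("Condition Ready of node %s is %v instead of %v. Reason: %s, message: %s\n",
					node, got, wantTrue, cond.Reason, cond.Message)
				return false, nil
			}
			return false, nil // condition not reported yet
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	// Mirror the two waits in the log: Ready drops to false, then must
	// return to true within 5 minutes.
	_ = waitNodeReady(ctx, client, "bootstrap-e2e-minion-group-rndd", false, 20*time.Second)
	_ = waitNodeReady(ctx, client, "bootstrap-e2e-minion-group-rndd", true, 5*time.Minute)
}

The repeated "is true instead of false" and later "is false instead of true" entries are simply successive rounds of this loop observing the condition before it flips.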
Jan 28 21:57:24.451: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:24.451: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:26.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:26.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:26.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:28.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:28.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:28.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:30.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:30.584: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:30.584: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:32.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:32.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:32.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:34.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:34.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:57:34.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:36.608: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:36.718: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:36.718: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:38.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:38.763: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:38.763: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:40.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:40.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:40.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:42.764: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:42.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:42.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:44.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:44.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
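For context on why all three nodes are suddenly NotReady: this Reboot variant deliberately blocks all inbound packets on each node for a while, so kubelet status updates stop reaching the API server and the Ready condition decays to NodeStatusUnknown until the rule is removed. The real command lives in test/e2e/cloud/gcp/reboot.go and is executed on the node over SSH; the sketch below only illustrates the shape of the disruption, and the exact iptables flags and duration are assumptions:

package main

import "fmt"

func main() {
	const dropSeconds = 120 // illustrative; the test chooses its own window
	// Detach with nohup so the rule outlives the SSH session: drop inbound
	// traffic, wait, then delete the rule so the node can recover.
	cmd := fmt.Sprintf(
		"nohup sh -c 'sudo iptables -I INPUT 1 -j DROP && sleep %d && sudo iptables -D INPUT -j DROP' >/dev/null 2>&1 &",
		dropSeconds)
	fmt.Println(cmd)
}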
Jan 28 21:57:44.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:46.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:46.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:46.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:48.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:48.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:48.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:50.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:51.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:51.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:52.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:53.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:53.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:55.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:55.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:57:55.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:57.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:57.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:57.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:59.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:59.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:59.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:01.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:01.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:01.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:03.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:03.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:03.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:05.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:05.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
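The "is false, but Node is tainted by NodeController with [...]. Failure" entries show a second gate: besides the Ready condition itself, the check keeps failing while the node lifecycle controller's unreachable taints (node.kubernetes.io/unreachable with NoSchedule and NoExecute effects) are still on the Node, since workloads cannot return to the node until they are cleared. A hedged sketch of that taint inspection; the helper name and the exact set of keys filtered are assumptions, not the framework code:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// controllerTaints picks out the taints the node lifecycle controller
// manages; their presence drives the "Node is tainted ..." lines above.
func controllerTaints(node *v1.Node) []v1.Taint {
	var out []v1.Taint
	for _, t := range node.Spec.Taints {
		switch t.Key {
		case v1.TaintNodeUnreachable, v1.TaintNodeNotReady:
			out = append(out, t)
		}
	}
	return out
}

func main() {
	// In the test the Node comes from the API server; this stub reproduces
	// the taints reported for bootstrap-e2e-minion-group-gw8s.
	node := &v1.Node{}
	node.Spec.Taints = []v1.Taint{
		{Key: v1.TaintNodeUnreachable, Effect: v1.TaintEffectNoSchedule},
		{Key: v1.TaintNodeUnreachable, Effect: v1.TaintEffectNoExecute},
	}
	if ts := controllerTaints(node); len(ts) > 0 {
		fmt.Printf("Node is tainted by NodeController with %v. Failure\n", ts)
	}
}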
Jan 28 21:58:05.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:07.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:07.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:07.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:09.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:09.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:09.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:11.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:11.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:11.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:13.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:13.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:13.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:15.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:15.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:58:15.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:17.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:17.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:17.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:19.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:19.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:19.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:21.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:21.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:21.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:23.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:23.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:23.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:25.693: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:25.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:25.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:27.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:27.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:27.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:29.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:29.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:29.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:31.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:31.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:31.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:33.866: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:33.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:33.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:35.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:36.012: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:36.012: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:37.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
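Just below, the log moves to the per-node pod check: once a node looks Ready again, the test waits up to 5m0s for that node's system pods to be "running and ready, or succeeded", and only then prints "Reboot successful on node ...". A minimal sketch of the predicate those "Error evaluating pod condition" lines are applying (the helper name is assumed):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// runningReadyOrSucceeded reports whether a pod passes the check the log
// evaluates: already Succeeded, or Running with the Ready condition True.
func runningReadyOrSucceeded(pod *v1.Pod) bool {
	if pod.Status.Phase == v1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Running but readiness=false, like metadata-proxy-v0.1-8gbc7 below:
	// the pod keeps failing the check until its Ready condition turns True.
	pod := &v1.Pod{Status: v1.PodStatus{
		Phase:      v1.PodRunning,
		Conditions: []v1.PodCondition{{Type: v1.PodReady, Status: v1.ConditionFalse}},
	}}
	fmt.Println(runningReadyOrSucceeded(pod)) // false
}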
Jan 28 21:58:38.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:38.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:40.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:40.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:40.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:42.058: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 21:58:42.058: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:42.059: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:42.102: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 43.856067ms Jan 28 21:58:42.102: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 43.730427ms Jan 28 21:58:42.102: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:42.102: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:42.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:42.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:44.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.088303079s Jan 28 21:58:44.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:44.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 2.088265733s Jan 28 21:58:44.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:44.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:44.201: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:46.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 4.088181682s Jan 28 21:58:46.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:46.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 4.088063457s Jan 28 21:58:46.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:46.243: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. 
Failure Jan 28 21:58:46.244: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq] Jan 28 21:58:46.244: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:46.244: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:46.244: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:46.245: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:46.290: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Running", Reason="", readiness=true. Elapsed: 45.877861ms Jan 28 21:58:46.290: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq" satisfied condition "running and ready, or succeeded" Jan 28 21:58:46.291: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.71068ms Jan 28 21:58:46.291: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:46.292: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=false. Elapsed: 47.176651ms Jan 28 21:58:46.292: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xp6b5' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 21:58:46.292: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 47.29268ms Jan 28 21:58:46.292: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded" Jan 28 21:58:48.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 6.087886017s Jan 28 21:58:48.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.087723959s Jan 28 21:58:48.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:48.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:48.287: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 21:58:48.287: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:48.287: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:48.331: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 43.385731ms Jan 28 21:58:48.331: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded" Jan 28 21:58:48.331: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 43.462632ms Jan 28 21:58:48.331: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded" Jan 28 21:58:48.331: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 21:58:48.331: INFO: Reboot successful on node bootstrap-e2e-minion-group-gw8s Jan 28 21:58:48.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.089661271s Jan 28 21:58:48.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:48.335: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.090908203s Jan 28 21:58:48.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xp6b5' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 21:58:50.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 8.088276558s Jan 28 21:58:50.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:50.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 8.088323691s Jan 28 21:58:50.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:50.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.090844986s Jan 28 21:58:50.335: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=false. Elapsed: 4.090948605s Jan 28 21:58:50.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:50.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xp6b5' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 21:58:52.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.088478383s Jan 28 21:58:52.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 10.088645728s Jan 28 21:58:52.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:52.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:52.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.090242684s Jan 28 21:58:52.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:52.336: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 6.091518323s Jan 28 21:58:52.336: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded" Jan 28 21:58:54.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 12.089605017s Jan 28 21:58:54.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.08945048s Jan 28 21:58:54.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:54.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:54.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.090152426s Jan 28 21:58:54.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:56.163: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 14.104685911s Jan 28 21:58:56.163: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:56.163: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 14.104936407s Jan 28 21:58:56.163: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:56.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.088098463s Jan 28 21:58:56.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:58.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 16.088195899s Jan 28 21:58:58.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 16.08835885s Jan 28 21:58:58.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:58.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:58.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.089195576s Jan 28 21:58:58.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:00.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.087615169s Jan 28 21:59:00.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:00.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 18.087839895s Jan 28 21:59:00.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:00.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.089451391s Jan 28 21:59:00.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:02.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 20.088463472s Jan 28 21:59:02.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 20.088639931s Jan 28 21:59:02.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:02.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:02.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.089067849s Jan 28 21:59:02.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:04.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 22.088006388s Jan 28 21:59:04.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 22.088166172s Jan 28 21:59:04.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:04.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:04.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.089919292s Jan 28 21:59:04.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:06.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.089390693s Jan 28 21:59:06.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:06.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 24.089822251s Jan 28 21:59:06.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:06.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.090545388s Jan 28 21:59:06.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:08.185: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 26.12695643s Jan 28 21:59:08.185: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:08.185: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 26.126918912s Jan 28 21:59:08.185: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:08.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.090797111s Jan 28 21:59:08.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:10.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 28.089160884s Jan 28 21:59:10.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 28.089348437s Jan 28 21:59:10.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:10.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:10.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.088719097s Jan 28 21:59:10.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:12.192: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 30.13340568s Jan 28 21:59:12.192: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.133585288s Jan 28 21:59:12.192: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:12.192: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:12.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.089098804s Jan 28 21:59:12.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:14.175: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 32.116929463s Jan 28 21:59:14.176: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:14.177: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 32.118409398s Jan 28 21:59:14.177: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:14.336: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.091554302s Jan 28 21:59:14.336: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:16.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 34.088212309s Jan 28 21:59:16.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 34.088056248s Jan 28 21:59:16.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:16.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:16.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.088797902s Jan 28 21:59:16.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:18.149: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.090363849s Jan 28 21:59:18.149: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:18.149: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 36.090262748s Jan 28 21:59:18.149: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:18.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.089283352s Jan 28 21:59:18.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:20.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 38.087755088s Jan 28 21:59:20.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 38.087925995s Jan 28 21:59:20.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:20.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:20.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.088431386s Jan 28 21:59:20.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:22.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 40.088970483s Jan 28 21:59:22.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:22.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 40.089264616s Jan 28 21:59:22.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:22.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.089972185s Jan 28 21:59:22.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:24.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 42.08988853s Jan 28 21:59:24.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.090056899s Jan 28 21:59:24.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:24.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:24.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.08966298s Jan 28 21:59:24.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:26.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 44.087576929s Jan 28 21:59:26.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 44.087777633s Jan 28 21:59:26.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:26.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:26.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.088623072s Jan 28 21:59:26.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:28.145: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 46.087057472s Jan 28 21:59:28.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 46.087014641s Jan 28 21:59:28.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:28.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:28.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.088905207s Jan 28 21:59:28.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:30.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.087778399s Jan 28 21:59:30.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:30.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 48.088008963s Jan 28 21:59:30.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:30.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.088339021s Jan 28 21:59:30.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:32.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 50.087793426s Jan 28 21:59:32.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded" Jan 28 21:59:32.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 50.089482879s Jan 28 21:59:32.148: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded" Jan 28 21:59:32.148: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 21:59:32.148: INFO: Reboot successful on node bootstrap-e2e-minion-group-rndd Jan 28 21:59:32.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.09049544s Jan 28 21:59:32.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:34.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.08929958s Jan 28 21:59:34.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:36.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.088563756s Jan 28 21:59:36.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:38.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.089096947s Jan 28 21:59:38.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:40.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.088840616s Jan 28 21:59:40.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:42.332: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:42.332: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 28 21:59:42.332: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq] Jan 28 21:59:42.332: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-jdvv: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:59:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:59:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-28 21:53:22 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 40s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-jdvv_kube-system(e126030fe08b481bd93bca8e2433b514),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 21:57:46 +0000 UTC,FinishedAt:2023-01-28 21:59:08 +0000 UTC,ContainerID:containerd://3c804912d08457484a63eb55fa8c390aeb5f93be17d351527a1feb97a631c128,}} Ready:false RestartCount:3 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2 ImageID:sha256:ef97fd17575d534d8bc2960bbf1e744379f3ac6e86b9b97974e086f1516b75e5 ContainerID:containerd://3c804912d08457484a63eb55fa8c390aeb5f93be17d351527a1feb97a631c128 Started:0xc0014b31ff}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 21:59:42.371: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-jdvv/kube-proxy, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jdvv/log?container=kube-proxy&previous=false": dial tcp 35.230.109.193:443: 
connect: connection refused: Jan 28 21:59:42.371: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-jdvv/kube-proxy, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jdvv/log?container=kube-proxy&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 21:59:42.371: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:58:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:58:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.64.3.17 PodIPs:[{IP:10.64.3.17}] StartTime:2023-01-28 21:53:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 1m20s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(699caeb5-2b49-4d25-998b-e11af5bff8c6),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 21:58:03 +0000 UTC,FinishedAt:2023-01-28 21:58:29 +0000 UTC,ContainerID:containerd://af31181e930ff30e7572c1007523f86dcd2edb5145f67bb8e0783df25cfeda11,}} Ready:false RestartCount:4 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://af31181e930ff30e7572c1007523f86dcd2edb5145f67bb8e0783df25cfeda11 Started:0xc00567397f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 28 21:59:42.411: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 21:59:42.411: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 21:59:42.411: INFO: Node bootstrap-e2e-minion-group-jdvv failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. 
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:59:42.411 < Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/28/23 21:59:42.411 (3m14.171s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:59:42.411 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 21:59:42.412 Jan 28 21:59:42.451: INFO: Unexpected error: <*url.Error | 0xc0024125a0>: { Op: "Get", URL: "https://35.230.109.193/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc002c784b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0026706c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc005862720>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.230.109.193/api/v1/namespaces/kube-system/events": dial tcp 35.230.109.193:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 21:59:42.451 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:59:42.451 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:59:42.451 Jan 28 21:59:42.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:59:42.491 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:59:42.491 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:59:42.491 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:59:42.491 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:59:42.491 STEP: Collecting events from namespace "reboot-1090". 
- test/e2e/framework/debug/dump.go:42 @ 01/28/23 21:59:42.491 Jan 28 21:59:42.530: INFO: Unexpected error: failed to list events in namespace "reboot-1090": <*url.Error | 0xc0026706f0>: { Op: "Get", URL: "https://35.230.109.193/api/v1/namespaces/reboot-1090/events", Err: <*net.OpError | 0xc0028ce960>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0026770e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004507300>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:59:42.531 (40ms) [FAILED] failed to list events in namespace "reboot-1090": Get "https://35.230.109.193/api/v1/namespaces/reboot-1090/events": dial tcp 35.230.109.193:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/28/23 21:59:42.531 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:59:42.531 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:59:42.531 STEP: Destroying namespace "reboot-1090" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 21:59:42.531 [FAILED] Couldn't delete ns: "reboot-1090": Delete "https://35.230.109.193/api/v1/namespaces/reboot-1090": dial tcp 35.230.109.193:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.109.193/api/v1/namespaces/reboot-1090", Err:(*net.OpError)(0xc002c78a50)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 21:59:42.571 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:59:42.571 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:59:42.571 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:59:42.571 (0s)
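The loop above polls each pod every two seconds and, on failure, prints the pod's full condition list. Below is a minimal sketch of the predicate being evaluated ("running and ready, or succeeded"), reconstructed with client-go types purely for illustration; the real helper lives in the e2e framework and its exact shape may differ:

    package podcheck

    import (
        v1 "k8s.io/api/core/v1"
    )

    // podRunningReadyOrSucceeded mirrors the condition named in the log:
    // a pod passes if its phase is Succeeded, or if it is Running with
    // the Ready condition set to True.
    func podRunningReadyOrSucceeded(pod *v1.Pod) bool {
        if pod.Status.Phase == v1.PodSucceeded {
            return true
        }
        if pod.Status.Phase != v1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == v1.PodReady {
                return cond.Status == v1.ConditionTrue
            }
        }
        // No Ready condition recorded yet: treat as not ready. This is
        // the case that produces the repeated "didn't have condition
        // {Ready True}" messages in the timeline above.
        return false
    }

Against this predicate, kube-proxy-bootstrap-e2e-minion-group-rndd and metadata-proxy-v0.1-8gbc7 flip to readiness=true at 21:59:32 and the rndd reboot is declared successful, while volume-snapshot-controller-0 stays unready (CrashLoopBackOff) until the apiserver becomes unreachable and the poll aborts with a non-retryable "connection refused" error.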
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
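Unescaped, the --ginkgo.focus value above selects this single spec:

    Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on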
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:59:42.411 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
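For reference, the disruption this spec performs on every node is the inline script in the SSH commands recorded later in this timeline. Reflowed for readability (logic verbatim from the log; the comments are annotations, not part of the recorded command):

    nohup sh -c '
        sleep 10                                                # brief delay before cutting the link
        echo Shutting down eth0 | sudo tee /dev/kmsg            # leave a marker in the kernel log
        sudo ip link set eth0 down | sudo tee /dev/kmsg         # take the primary interface down
        sleep 120                                               # two-minute outage window
        echo Starting up eth0 | sudo tee /dev/kmsg
        sudo ip link set eth0 up | sudo tee /dev/kmsg           # bring the link back up
        sleep 10
        echo Retrying starting up eth0 | sudo tee /dev/kmsg
        sudo ip link set eth0 up | sudo tee /dev/kmsg           # retry the link-up once
        echo Running dhclient | sudo tee /dev/kmsg
        sudo dhclient | sudo tee /dev/kmsg                      # re-acquire a DHCP lease
        echo Starting systemd-networkd | sudo tee /dev/kmsg
        sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg
    ' >/dev/null 2>&1 &                                         # detach so the SSH call returns immediately

After issuing this on each node, the test waits up to 2m0s for the node's Ready condition to go false; the repeated "Condition Ready ... is true instead of false" lines below are that wait polling while the node is still reporting Ready.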
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:56:27.945 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:56:27.945 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:56:27.945 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:56:27.946 Jan 28 21:56:27.946: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:56:27.947 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 21:56:28.077 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 21:56:28.158 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:56:28.24 (295ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:56:28.24 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:56:28.24 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/28/23 21:56:28.24 Jan 28 21:56:28.336: INFO: Getting bootstrap-e2e-minion-group-rndd Jan 28 21:56:28.378: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-rndd condition Ready to be true Jan 28 21:56:28.387: INFO: Getting bootstrap-e2e-minion-group-gw8s Jan 28 21:56:28.387: INFO: Getting bootstrap-e2e-minion-group-jdvv Jan 28 21:56:28.420: INFO: Node bootstrap-e2e-minion-group-rndd has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 21:56:28.420: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 21:56:28.420: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:56:28.420: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:56:28.430: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true Jan 28 21:56:28.430: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jdvv condition Ready to be true Jan 28 21:56:28.463: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 42.83729ms Jan 28 21:56:28.463: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 42.749912ms Jan 28 21:56:28.463: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded" Jan 28 21:56:28.463: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded" Jan 28 21:56:28.463: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 21:56:28.463: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd Jan 28 21:56:28.463: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22) Jan 28 21:56:28.473: INFO: Node bootstrap-e2e-minion-group-gw8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:56:28.473: INFO: Node bootstrap-e2e-minion-group-jdvv has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq] Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq] Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:56:28.473: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:56:28.517: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 43.996436ms Jan 28 21:56:28.517: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded" Jan 28 21:56:28.518: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 45.757976ms Jan 28 21:56:28.518: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded" Jan 28 21:56:28.518: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 21:56:28.518: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s Jan 28 21:56:28.518: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22) Jan 28 21:56:28.520: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 46.554567ms Jan 28 21:56:28.520: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 21:56:28.520: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Running", Reason="", readiness=true. Elapsed: 47.328453ms Jan 28 21:56:28.520: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq" satisfied condition "running and ready, or succeeded" Jan 28 21:56:28.521: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 47.842257ms Jan 28 21:56:28.521: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded" Jan 28 21:56:28.521: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 48.00027ms Jan 28 21:56:28.521: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded" Jan 28 21:56:28.521: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq] Jan 28 21:56:28.521: INFO: Getting external IP address for bootstrap-e2e-minion-group-jdvv Jan 28 21:56:28.521: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-jdvv(34.127.24.56:22) Jan 28 21:56:28.986: INFO: ssh prow@34.145.37.78:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 21:56:28.986: INFO: ssh prow@34.145.37.78:22: stdout: "" Jan 28 21:56:28.986: INFO: ssh prow@34.145.37.78:22: stderr: "" Jan 28 21:56:28.986: INFO: ssh prow@34.145.37.78:22: exit code: 0 Jan 28 21:56:28.986: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be false Jan 28 21:56:29.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:56:29.045: INFO: ssh prow@34.127.24.56:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 21:56:29.045: INFO: ssh prow@34.127.24.56:22: stdout: "" Jan 28 21:56:29.045: INFO: ssh prow@34.127.24.56:22: stderr: "" Jan 28 21:56:29.045: INFO: ssh prow@34.127.24.56:22: exit code: 0 Jan 28 21:56:29.045: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-jdvv condition Ready to be false Jan 28 21:56:29.045: INFO: ssh prow@34.105.20.128:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 21:56:29.045: INFO: ssh prow@34.105.20.128:22: stdout: "" Jan 28 21:56:29.045: INFO: ssh prow@34.105.20.128:22: stderr: "" Jan 28 21:56:29.045: INFO: ssh prow@34.105.20.128:22: exit code: 0 Jan 28 21:56:29.045: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be false Jan 28 21:56:29.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:29.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:31.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:31.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:31.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:33.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:33.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:33.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:35.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:56:35.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:35.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:37.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:37.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:37.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:39.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:39.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:39.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:41.291: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:41.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:41.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:43.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:43.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:43.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:45.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:45.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:45.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:47.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:47.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:47.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:49.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:49.537: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:49.537: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:51.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:51.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:51.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:53.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:53.625: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:53.625: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:55.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:55.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:55.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:57.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:57.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:57.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:59.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:56:59.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:56:59.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:01.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:01.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:01.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:03.769: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:03.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:03.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:05.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:05.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:05.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:07.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:08.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:08.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:09.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:10.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:10.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:12.041: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:12.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:12.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:14.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:14.187: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:14.187: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:16.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:16.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:16.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:18.173: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be true Jan 28 21:57:18.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:18.276: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true Jan 28 21:57:18.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:18.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:20.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:20.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:57:20.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:22.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:22.364: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-jdvv condition Ready to be true Jan 28 21:57:22.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:22.406: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:24.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
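For reference, the network-disruption command that the test dispatches to each node over SSH (logged above as a single line) reads more clearly when reflowed; it takes eth0 down for 120 seconds, brings it back up, re-acquires a DHCP lease, and restarts systemd-networkd, echoing each step to /dev/kmsg so progress stays visible on the serial console while SSH is cut off:

    nohup sh -c '
        sleep 10
        echo Shutting down eth0 | sudo tee /dev/kmsg
        sudo ip link set eth0 down | sudo tee /dev/kmsg
        sleep 120
        echo Starting up eth0 | sudo tee /dev/kmsg
        sudo ip link set eth0 up | sudo tee /dev/kmsg
        sleep 10
        echo Retrying starting up eth0 | sudo tee /dev/kmsg
        sudo ip link set eth0 up | sudo tee /dev/kmsg
        echo Running dhclient | sudo tee /dev/kmsg
        sudo dhclient | sudo tee /dev/kmsg
        echo Starting systemd-networkd | sudo tee /dev/kmsg
        sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg
    ' >/dev/null 2>&1 &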
Jan 28 21:57:24.451: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:24.451: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:26.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:26.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:26.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:28.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:28.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:28.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:30.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:30.584: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:30.584: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:32.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:32.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:32.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:34.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:34.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:57:34.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:36.608: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:36.718: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:36.718: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:38.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:38.763: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:38.763: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:40.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:40.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:40.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:42.764: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:42.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:42.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:44.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:44.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
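The repeated "Condition Ready of node ... is false instead of true" lines are the output of a poll against each node's status conditions; the roughly 2-second spacing of the timestamps matches a 2-second poll interval. A minimal client-go sketch of that check (waitForNodeReady is a hypothetical helper, not the framework's own; it assumes an already-configured kubernetes.Interface clientset):

    import (
        "context"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady polls until the node's Ready condition reports the
    // desired status or the timeout expires (hypothetical helper; the real
    // framework helper also logs the reason/message strings seen in this log).
    func waitForNodeReady(cs kubernetes.Interface, name string, want v1.ConditionStatus, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient API errors and keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == v1.NodeReady {
                    return c.Status == want, nil
                }
            }
            return false, nil // Ready condition not reported yet
        })
    }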
Jan 28 21:57:44.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:46.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:46.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:46.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:48.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:48.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:48.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:50.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:51.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:51.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:52.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:53.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:53.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:55.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:55.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:57:55.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:57.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:57.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:57.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:59.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:57:59.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:57:59.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:01.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:01.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:01.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:03.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:03.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:03.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:05.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:05.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:58:05.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:07.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:07.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:07.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:09.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:09.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:09.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:11.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:11.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:11.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:13.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:13.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:13.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:15.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:15.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
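Note the difference in the gw8s lines: its Ready condition has flipped, but the node controller has applied node.kubernetes.io/unreachable taints (NoSchedule at 21:57:16, NoExecute at 21:57:21), and the wait does not count a node as recovered while those taints remain (hence the later "Condition Ready ... is true, but Node is tainted by NodeController ... Failure" entries). A sketch of that extra check, under the same assumptions as the snippet above:

    // hasUnreachableTaint reports whether the node controller's
    // unreachable taint (either effect) is still present on the node.
    func hasUnreachableTaint(node *v1.Node) bool {
        for _, t := range node.Spec.Taints {
            if t.Key == "node.kubernetes.io/unreachable" {
                return true
            }
        }
        return false
    }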
Jan 28 21:58:15.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:17.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:17.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:17.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:19.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:19.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:19.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:21.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:21.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:21.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:23.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:23.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:23.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:25.693: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:25.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:25.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:27.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:27.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:27.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:29.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:29.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:29.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:31.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:31.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:31.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:33.866: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:33.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:33.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:35.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:36.012: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:36.012: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:37.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
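Once a node reports Ready again, the test moves on (immediately below) to waiting for that node's system pods to be "running and ready, or succeeded", dumping each pod's full condition list whenever the check fails. A minimal sketch of that per-pod predicate (podRunningReadyOrSucceeded is a hypothetical name for the condition quoted in the log):

    // podRunningReadyOrSucceeded passes a pod that has Succeeded, or that is
    // Running with condition Ready=True; anything else keeps the wait going.
    func podRunningReadyOrSucceeded(pod *v1.Pod) bool {
        if pod.Status.Phase == v1.PodSucceeded {
            return true
        }
        if pod.Status.Phase != v1.PodRunning {
            return false
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == v1.PodReady {
                return c.Status == v1.ConditionTrue
            }
        }
        return false
    }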
Jan 28 21:58:38.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:38.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:40.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:40.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:40.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:42.058: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 21:58:42.058: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:42.059: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:42.102: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 43.856067ms Jan 28 21:58:42.102: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 43.730427ms Jan 28 21:58:42.102: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:42.102: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:42.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 21:57:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:42.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:44.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.088303079s Jan 28 21:58:44.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:44.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 2.088265733s Jan 28 21:58:44.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:44.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. Failure Jan 28 21:58:44.201: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:58:46.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 4.088181682s Jan 28 21:58:46.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:46.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 4.088063457s Jan 28 21:58:46.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:46.243: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 21:57:21 +0000 UTC}]. 
Failure Jan 28 21:58:46.244: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq] Jan 28 21:58:46.244: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:46.244: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:46.244: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:46.245: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:46.290: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Running", Reason="", readiness=true. Elapsed: 45.877861ms Jan 28 21:58:46.290: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq" satisfied condition "running and ready, or succeeded" Jan 28 21:58:46.291: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.71068ms Jan 28 21:58:46.291: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:46.292: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=false. Elapsed: 47.176651ms Jan 28 21:58:46.292: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xp6b5' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 21:58:46.292: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 47.29268ms Jan 28 21:58:46.292: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded" Jan 28 21:58:48.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 6.087886017s Jan 28 21:58:48.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.087723959s Jan 28 21:58:48.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:48.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:48.287: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 21:58:48.287: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:48.287: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:58:48.331: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 43.385731ms Jan 28 21:58:48.331: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded" Jan 28 21:58:48.331: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 43.462632ms Jan 28 21:58:48.331: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded" Jan 28 21:58:48.331: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 21:58:48.331: INFO: Reboot successful on node bootstrap-e2e-minion-group-gw8s Jan 28 21:58:48.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.089661271s Jan 28 21:58:48.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:48.335: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.090908203s Jan 28 21:58:48.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xp6b5' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 21:58:50.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 8.088276558s Jan 28 21:58:50.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:50.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 8.088323691s Jan 28 21:58:50.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:50.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.090844986s Jan 28 21:58:50.335: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=false. Elapsed: 4.090948605s Jan 28 21:58:50.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:50.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xp6b5' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 21:58:52.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.088478383s Jan 28 21:58:52.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 10.088645728s Jan 28 21:58:52.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:52.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:52.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.090242684s Jan 28 21:58:52.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:52.336: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 6.091518323s Jan 28 21:58:52.336: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded" Jan 28 21:58:54.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 12.089605017s Jan 28 21:58:54.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.08945048s Jan 28 21:58:54.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:54.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:54.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.090152426s Jan 28 21:58:54.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:56.163: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 14.104685911s Jan 28 21:58:56.163: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:56.163: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 14.104936407s Jan 28 21:58:56.163: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:56.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.088098463s Jan 28 21:58:56.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:58:58.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 16.088195899s Jan 28 21:58:58.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 16.08835885s Jan 28 21:58:58.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:58.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:58:58.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.089195576s Jan 28 21:58:58.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:00.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.087615169s Jan 28 21:59:00.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:00.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 18.087839895s Jan 28 21:59:00.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:00.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.089451391s Jan 28 21:59:00.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:02.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 20.088463472s Jan 28 21:59:02.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 20.088639931s Jan 28 21:59:02.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:02.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:02.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.089067849s Jan 28 21:59:02.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:04.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 22.088006388s Jan 28 21:59:04.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 22.088166172s Jan 28 21:59:04.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:04.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:04.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.089919292s Jan 28 21:59:04.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:06.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.089390693s Jan 28 21:59:06.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:06.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 24.089822251s Jan 28 21:59:06.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:06.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.090545388s Jan 28 21:59:06.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:08.185: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 26.12695643s Jan 28 21:59:08.185: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:08.185: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 26.126918912s Jan 28 21:59:08.185: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:08.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.090797111s Jan 28 21:59:08.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:10.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 28.089160884s Jan 28 21:59:10.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 28.089348437s Jan 28 21:59:10.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:10.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:10.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.088719097s Jan 28 21:59:10.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:12.192: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 30.13340568s Jan 28 21:59:12.192: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.133585288s Jan 28 21:59:12.192: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:12.192: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:12.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.089098804s Jan 28 21:59:12.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:14.175: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 32.116929463s Jan 28 21:59:14.176: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:14.177: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 32.118409398s Jan 28 21:59:14.177: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:14.336: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.091554302s Jan 28 21:59:14.336: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:16.147: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 34.088212309s Jan 28 21:59:16.147: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 34.088056248s Jan 28 21:59:16.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:16.147: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:16.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.088797902s Jan 28 21:59:16.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:18.149: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.090363849s Jan 28 21:59:18.149: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:18.149: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 36.090262748s Jan 28 21:59:18.149: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:18.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.089283352s Jan 28 21:59:18.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:20.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 38.087755088s Jan 28 21:59:20.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 38.087925995s Jan 28 21:59:20.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:20.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:20.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.088431386s Jan 28 21:59:20.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:22.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 40.088970483s Jan 28 21:59:22.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:22.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 40.089264616s Jan 28 21:59:22.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:22.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.089972185s Jan 28 21:59:22.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:24.148: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 42.08988853s Jan 28 21:59:24.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.090056899s Jan 28 21:59:24.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:24.148: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:24.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.08966298s Jan 28 21:59:24.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:26.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 44.087576929s Jan 28 21:59:26.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 44.087777633s Jan 28 21:59:26.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:26.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:26.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.088623072s Jan 28 21:59:26.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:28.145: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 46.087057472s Jan 28 21:59:28.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 46.087014641s Jan 28 21:59:28.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:28.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:28.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.088905207s Jan 28 21:59:28.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:30.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.087778399s Jan 28 21:59:30.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:54:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:30.146: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=false. Elapsed: 48.088008963s Jan 28 21:59:30.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-8gbc7' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:57:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }] Jan 28 21:59:30.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.088339021s Jan 28 21:59:30.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:32.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 50.087793426s Jan 28 21:59:32.146: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded" Jan 28 21:59:32.148: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 50.089482879s Jan 28 21:59:32.148: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded" Jan 28 21:59:32.148: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 21:59:32.148: INFO: Reboot successful on node bootstrap-e2e-minion-group-rndd Jan 28 21:59:32.335: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.09049544s Jan 28 21:59:32.335: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:34.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.08929958s Jan 28 21:59:34.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:36.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.088563756s Jan 28 21:59:36.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:38.334: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.089096947s Jan 28 21:59:38.334: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:40.333: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.088840616s Jan 28 21:59:40.333: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:30 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 21:59:42.332: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 35.230.109.193:443: connect: connection refused Jan 28 21:59:42.332: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 28 21:59:42.332: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq] Jan 28 21:59:42.332: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-jdvv: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:59:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:59:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-28 21:53:22 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 40s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-jdvv_kube-system(e126030fe08b481bd93bca8e2433b514),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 21:57:46 +0000 UTC,FinishedAt:2023-01-28 21:59:08 +0000 UTC,ContainerID:containerd://3c804912d08457484a63eb55fa8c390aeb5f93be17d351527a1feb97a631c128,}} Ready:false RestartCount:3 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2 ImageID:sha256:ef97fd17575d534d8bc2960bbf1e744379f3ac6e86b9b97974e086f1516b75e5 ContainerID:containerd://3c804912d08457484a63eb55fa8c390aeb5f93be17d351527a1feb97a631c128 Started:0xc0014b31ff}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 21:59:42.371: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-jdvv/kube-proxy, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jdvv/log?container=kube-proxy&previous=false": dial tcp 35.230.109.193:443: 
connect: connection refused: Jan 28 21:59:42.371: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-jdvv/kube-proxy, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jdvv/log?container=kube-proxy&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 21:59:42.371: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:58:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:58:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:53:38 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.64.3.17 PodIPs:[{IP:10.64.3.17}] StartTime:2023-01-28 21:53:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 1m20s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(699caeb5-2b49-4d25-998b-e11af5bff8c6),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 21:58:03 +0000 UTC,FinishedAt:2023-01-28 21:58:29 +0000 UTC,ContainerID:containerd://af31181e930ff30e7572c1007523f86dcd2edb5145f67bb8e0783df25cfeda11,}} Ready:false RestartCount:4 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://af31181e930ff30e7572c1007523f86dcd2edb5145f67bb8e0783df25cfeda11 Started:0xc00567397f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 28 21:59:42.411: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 21:59:42.411: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.230.109.193/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.230.109.193:443: connect: connection refused: Jan 28 21:59:42.411: INFO: Node bootstrap-e2e-minion-group-jdvv failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. 
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:59:42.411 < Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/28/23 21:59:42.411 (3m14.171s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:59:42.411 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 21:59:42.412 Jan 28 21:59:42.451: INFO: Unexpected error: <*url.Error | 0xc0024125a0>: { Op: "Get", URL: "https://35.230.109.193/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc002c784b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0026706c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc005862720>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.230.109.193/api/v1/namespaces/kube-system/events": dial tcp 35.230.109.193:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 21:59:42.451 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:59:42.451 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:59:42.451 Jan 28 21:59:42.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:59:42.491 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:59:42.491 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:59:42.491 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:59:42.491 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:59:42.491 STEP: Collecting events from namespace "reboot-1090". 
- test/e2e/framework/debug/dump.go:42 @ 01/28/23 21:59:42.491 Jan 28 21:59:42.530: INFO: Unexpected error: failed to list events in namespace "reboot-1090": <*url.Error | 0xc0026706f0>: { Op: "Get", URL: "https://35.230.109.193/api/v1/namespaces/reboot-1090/events", Err: <*net.OpError | 0xc0028ce960>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0026770e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 109, 193], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004507300>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:59:42.531 (40ms) [FAILED] failed to list events in namespace "reboot-1090": Get "https://35.230.109.193/api/v1/namespaces/reboot-1090/events": dial tcp 35.230.109.193:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/28/23 21:59:42.531 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:59:42.531 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:59:42.531 STEP: Destroying namespace "reboot-1090" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 21:59:42.531 [FAILED] Couldn't delete ns: "reboot-1090": Delete "https://35.230.109.193/api/v1/namespaces/reboot-1090": dial tcp 35.230.109.193:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.109.193/api/v1/namespaces/reboot-1090", Err:(*net.OpError)(0xc002c78a50)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 21:59:42.571 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:59:42.571 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:59:42.571 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:59:42.571 (0s)
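The poll entries above are the e2e framework repeatedly re-evaluating each node's critical pods after the reboot, on a roughly 2-second cadence, until they are "running and ready, or succeeded" or the 5m0s budget runs out. A minimal sketch of that style of check with client-go (helper names here are illustrative, not the framework's own; note the real run treats "connection refused" as non-retryable and aborts, as the volume-snapshot-controller-0 entry shows, whereas this sketch simply keeps polling):

    package rebootcheck

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // podRunningReadyOrSucceeded is the condition the log keeps evaluating:
    // pass if the pod Succeeded, or if it is Running with Ready=True.
    func podRunningReadyOrSucceeded(pod *corev1.Pod) bool {
        if pod.Status.Phase == corev1.PodSucceeded {
            return true
        }
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForPodReady polls on the ~2s cadence visible in the elapsed
    // times above, up to the caller's timeout (5m0s in this run).
    func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // Unlike the framework, retry unconditionally here; the real
                // run aborted once the apiserver refused connections.
                fmt.Printf("poll error (retrying): %v\n", err)
                return false, nil
            }
            return podRunningReadyOrSucceeded(pod), nil
        })
    }

The status dumps near the end of the block show why the node ultimately fails the check: kube-proxy and volume-snapshot-controller are stuck in CrashLoopBackOff after the reboot, so Ready never flips back to True within the budget.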
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:05:37.559
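This variant forces the reboot by enabling sysrq and writing "c" to /proc/sysrq-trigger over SSH; the exact payload appears verbatim in the "SSH ..." log line further down. A minimal stand-alone sketch of issuing the same command through the stock ssh client (the user@host value is copied from the log and is environment-specific; this is an illustration, not the suite's own SSH helper):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same payload the suite sends (see the SSH log line below):
        // enable sysrq, give the session time to detach, then panic the kernel.
        payload := "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && " +
            "sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"
        // prow@34.105.20.128 matches this run; replace with your node's address.
        out, err := exec.Command("ssh", "prow@34.105.20.128", payload).CombinedOutput()
        if err != nil {
            log.Fatalf("ssh failed: %v: %s", err, out)
        }
    }

The nohup/background wrapping matters: the panic must fire after the SSH session has detached, otherwise the command itself would be killed mid-flight and the test could not observe a clean exit code 0, as it does below.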
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:00:12.824
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 22:00:12.824 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:00:12.824
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 22:00:12.824
Jan 28 22:00:12.824: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 22:00:12.825
Jan 28 22:00:12.864: INFO: Unexpected error while creating namespace: Post "https://35.230.109.193/api/v1/namespaces": dial tcp 35.230.109.193:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 22:01:32.473
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 22:01:32.604
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 22:01:32.756 (1m19.932s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:01:32.756
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 22:01:32.756 (0s)
> Enter [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/28/23 22:01:32.756
Jan 28 22:01:32.982: INFO: Getting bootstrap-e2e-minion-group-gw8s
Jan 28 22:01:32.982: INFO: Getting bootstrap-e2e-minion-group-rndd
Jan 28 22:01:32.983: INFO: Getting bootstrap-e2e-minion-group-jdvv
Jan 28 22:01:33.026: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true
Jan 28 22:01:33.046: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jdvv condition Ready to be true
Jan 28 22:01:33.046: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-rndd condition Ready to be true
Jan 28 22:01:33.068: INFO: Node bootstrap-e2e-minion-group-gw8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 22:01:33.068: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 22:01:33.068: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:01:33.069: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:01:33.090: INFO: Node bootstrap-e2e-minion-group-rndd has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:01:33.090: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:01:33.090: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:01:33.090: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:01:33.090: INFO: Node bootstrap-e2e-minion-group-jdvv has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq]
Jan 28 22:01:33.090: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq]
Jan 28 22:01:33.090: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-rtgpq" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:01:33.090: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xp6b5" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:01:33.090: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:01:33.090: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:01:33.112: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. Elapsed: 42.972207ms
Jan 28 22:01:33.112: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded"
Jan 28 22:01:33.112: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 43.228412ms
Jan 28 22:01:33.112: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded"
Jan 28 22:01:33.112: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn]
Jan 28 22:01:33.112: INFO: Getting external IP address for bootstrap-e2e-minion-group-gw8s
Jan 28 22:01:33.112: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-gw8s(34.105.20.128:22)
Jan 28 22:01:33.137: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq": Phase="Running", Reason="", readiness=true. Elapsed: 47.638206ms
Jan 28 22:01:33.137: INFO: Pod "kube-dns-autoscaler-5f6455f985-rtgpq" satisfied condition "running and ready, or succeeded"
Jan 28 22:01:33.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.526838ms
Jan 28 22:01:33.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }]
Jan 28 22:01:33.139: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 48.96055ms
Jan 28 22:01:33.139: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded"
Jan 28 22:01:33.139: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=false. Elapsed: 48.945267ms
Jan 28 22:01:33.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-rndd' on 'bootstrap-e2e-minion-group-rndd' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:25 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:25 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:20 +0000 UTC }]
Jan 28 22:01:33.140: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 49.845121ms
Jan 28 22:01:33.140: INFO: Pod "metadata-proxy-v0.1-xp6b5": Phase="Running", Reason="", readiness=true. Elapsed: 49.935802ms
Jan 28 22:01:33.140: INFO: Pod "metadata-proxy-v0.1-xp6b5" satisfied condition "running and ready, or succeeded"
Jan 28 22:01:33.140: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }]
Jan 28 22:01:33.628: INFO: ssh prow@34.105.20.128:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &
Jan 28 22:01:33.628: INFO: ssh prow@34.105.20.128:22: stdout: ""
Jan 28 22:01:33.628: INFO: ssh prow@34.105.20.128:22: stderr: ""
Jan 28 22:01:33.628: INFO: ssh prow@34.105.20.128:22: exit code: 0
Jan 28 22:01:33.628: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be false
Jan 28 22:01:33.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
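The SSH command above is the standard magic-SysRq crash trigger: writing 1 to /proc/sys/kernel/sysrq enables all SysRq functions, and writing c to /proc/sysrq-trigger forces an immediate kernel panic; the nohup'd background shell plus the 10s sleep is what lets the SSH session return stdout "", stderr "", exit code 0 before the kernel dies. A minimal sketch of issuing it, assuming a plain `ssh` binary on the path (the e2e framework uses its own SSH helper, which differs):

```go
package main

import (
	"fmt"
	"os/exec"
)

// crashCmd enables the magic SysRq key, waits 10s so the SSH session can
// disconnect cleanly, then writes 'c' to /proc/sysrq-trigger to panic the
// kernel. Identical to the command string in the log above.
const crashCmd = "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && " +
	"sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"

func triggerKernelPanic(host string) error {
	out, err := exec.Command("ssh", host, crashCmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh %s: %v (output: %q)", host, err, out)
	}
	return nil // like the log: empty stdout/stderr, exit code 0
}

func main() {
	if err := triggerKernelPanic("prow@34.105.20.128"); err != nil {
		fmt.Println(err)
	}
}
```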
Jan 28 22:01:35.182 - 22:02:15.185: INFO: The poll repeats every ~2s for the next 40s with identical results. Pods "kube-proxy-bootstrap-e2e-minion-group-jdvv", "kube-proxy-bootstrap-e2e-minion-group-rndd", and "volume-snapshot-controller-0" remain Phase="Running", Reason="", readiness=false (Elapsed: 2.09s through 42.09s), and every tick logs the same condition error verbatim: pod didn't have condition {Ready True}; Ready and ContainersReady are False with reason ContainersNotReady (containers with unready status: [kube-proxy] for the two kube-proxy pods, since 22:01:21 on jdvv and 22:01:25 on rndd; [volume-snapshot-controller] for volume-snapshot-controller-0, since 22:00:06).
Jan 28 22:01:35.712 - 22:02:14.553: INFO: Interleaved every ~2s: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
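The repeated "Condition Ready ... is true instead of false" lines come from a poll loop waiting for the crashed node to drop out of Ready. A simplified client-go sketch of such a loop, using the kubeconfig path and 2m0s timeout from the log; this is a stand-in, not the framework's actual helper:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeNotReady polls every 2s, up to 2m0s, until the node's Ready
// condition is no longer True, printing the same style of message the log
// shows while the kubelet is still reporting ready.
func waitForNodeNotReady(cs kubernetes.Interface, name string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("Condition Ready of node %s is true instead of false. Reason: %s, message: %s\n",
						name, c.Reason, c.Message)
					return false, nil
				}
				return true, nil // node went NotReady: the reboot took hold
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeNotReady(cs, "bootstrap-e2e-minion-group-gw8s"); err != nil {
		fmt.Println("node never became NotReady:", err)
	}
}
```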
Elapsed: 42.094825357s Jan 28 22:02:15.185: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:15.185: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.094891814s Jan 28 22:02:15.185: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:16.596: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:17.184: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.093898332s Jan 28 22:02:17.184: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:17.185: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 44.095414665s Jan 28 22:02:17.185: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded" Jan 28 22:02:17.185: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7] Jan 28 22:02:17.185: INFO: Getting external IP address for bootstrap-e2e-minion-group-rndd Jan 28 22:02:17.185: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-rndd(34.145.37.78:22) Jan 28 22:02:17.185: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.09522117s Jan 28 22:02:17.185: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:17.708: INFO: ssh prow@34.145.37.78:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 22:02:17.708: INFO: ssh prow@34.145.37.78:22: stdout: "" Jan 28 22:02:17.708: INFO: ssh prow@34.145.37.78:22: stderr: "" Jan 28 22:02:17.708: INFO: ssh prow@34.145.37.78:22: exit code: 0 Jan 28 22:02:17.708: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be false Jan 28 22:02:17.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:18.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:19.182: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.09199319s Jan 28 22:02:19.182: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:19.183: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 46.093484632s Jan 28 22:02:19.183: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:19.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:20.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is true instead of false. 
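The SSH command logged at 22:02:17 is the crash-reboot trigger: it enables sysrq (echo 1 into /proc/sys/kernel/sysrq), sleeps ten seconds, then writes "c" to /proc/sysrq-trigger to panic the kernel; nohup plus the trailing & lets the SSH session return cleanly (exit code 0 above) before the node goes down. A standalone sketch of issuing the same command with golang.org/x/crypto/ssh — the address and user are copied from the log, the auth setup is a placeholder, and this is not the e2e framework's own SSH helper:

package main

import (
	"log"

	"golang.org/x/crypto/ssh"
)

// crashReboot runs the exact command seen in the log: enable sysrq, wait,
// then trigger a kernel crash so the node reboots abruptly. Backgrounding
// the inner shell lets Run return before the panic fires.
func crashReboot(addr string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()

	const cmd = `nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &`
	return sess.Run(cmd)
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "prow", // user seen in the log
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		// Auth omitted for brevity: supply ssh.PublicKeys(...) in real use.
	}
	if err := crashReboot("34.145.37.78:22", cfg); err != nil {
		log.Fatal(err)
	}
}

Note the empty stdout/stderr in the log: all output is redirected to /dev/null so the only signal the caller gets is the exit code, and the actual reboot is observed indirectly through the node's Ready condition.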
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:21.182: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.091760154s Jan 28 22:02:21.182: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:21.184: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 48.093641534s Jan 28 22:02:21.184: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:21.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:22.724: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-gw8s condition Ready to be true Jan 28 22:02:22.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:02:23.182: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.092057565s Jan 28 22:02:23.182: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:23.183: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.093147336s Jan 28 22:02:23.183: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:23.879: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:24.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:02:25.181: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.091235235s Jan 28 22:02:25.181: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:25.184: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 52.093840461s Jan 28 22:02:25.184: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:25.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:26.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:02:27.183: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
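The alternating "Condition Ready of node ... is true instead of false" lines are the same poll pattern applied to nodes: after triggering the crash, the test waits up to 2m0s for Ready to flip to false (the node actually went down), then up to 5m0s for it to come back. A sketch of the underlying condition lookup, again assuming only the core/v1 types:

package sketch

import corev1 "k8s.io/api/core/v1"

// nodeReadyIs reports whether the node's Ready condition currently has the
// wanted status; the log polls this until it flips or the wait times out.
func nodeReadyIs(node *corev1.Node, want corev1.ConditionStatus) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == want
		}
	}
	return false
}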
Elapsed: 54.092721385s Jan 28 22:02:27.183: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:27.184: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 54.094138623s Jan 28 22:02:27.184: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:27.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:28.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:29.182: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.092526489s Jan 28 22:02:29.182: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:29.183: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.093464784s Jan 28 22:02:29.183: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:30.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:30.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:31.181: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.091303054s Jan 28 22:02:31.181: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:31.183: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 58.093407971s Jan 28 22:02:31.183: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:32.054: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:32.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:33.180: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
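Once the kubelet on gw8s stops posting status, the node lifecycle controller first flips the node to NotReady with reason NodeStatusUnknown and then applies the node.kubernetes.io/unreachable taints, NoSchedule followed by NoExecute, which is what the "tainted by NodeController" lines above report. Spotting that state on a Node object is a simple walk over the spec (sketch; corev1.TaintNodeUnreachable is the published constant for this key):

package sketch

import corev1 "k8s.io/api/core/v1"

// isUnreachableTainted reports whether the NodeController has tainted the
// node as unreachable, matching the taint pairs printed in the log.
func isUnreachableTainted(node *corev1.Node) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == corev1.TaintNodeUnreachable {
			return true
		}
	}
	return false
}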
Elapsed: 1m0.090571974s Jan 28 22:02:33.181: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:33.183: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.093092421s Jan 28 22:02:33.183: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:37.523: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.433130263s Jan 28 22:02:37.523: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:37.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:37.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:37.537: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.447261655s Jan 28 22:02:37.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:39.182: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.092109616s Jan 28 22:02:39.182: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:39.183: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.093475027s Jan 28 22:02:39.183: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:39.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:39.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:41.180: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.089985626s Jan 28 22:02:41.180: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:41.184: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.094228088s Jan 28 22:02:41.184: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:41.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:41.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:43.181: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.091540137s Jan 28 22:02:43.181: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:43.183: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m10.0929616s Jan 28 22:02:43.183: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:43.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:43.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:45.181: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.09155394s Jan 28 22:02:45.181: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:45.184: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.093840621s Jan 28 22:02:45.184: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:45.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:45.708: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:47.181: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.091554208s Jan 28 22:02:47.181: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:47.184: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.094220882s Jan 28 22:02:47.184: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jdvv' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:01:21 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:02:47.752: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:47.752: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:49.181: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.091321791s Jan 28 22:02:49.181: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:49.184: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv": Phase="Running", Reason="", readiness=true. Elapsed: 1m16.093847968s Jan 28 22:02:49.184: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jdvv" satisfied condition "running and ready, or succeeded" Jan 28 22:02:49.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:49.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:51.181: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.091146195s Jan 28 22:02:51.181: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:51.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:51.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:53.180: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.090434509s Jan 28 22:02:53.180: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-jdvv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:00:06 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:38 +0000 UTC }] Jan 28 22:02:53.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:53.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:55.181: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m22.090782455s Jan 28 22:02:55.181: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 22:02:55.181: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-jdvv metadata-proxy-v0.1-xp6b5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-rtgpq] Jan 28 22:02:55.181: INFO: Getting external IP address for bootstrap-e2e-minion-group-jdvv Jan 28 22:02:55.181: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-jdvv(34.127.24.56:22) Jan 28 22:02:55.705: INFO: ssh prow@34.127.24.56:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 28 22:02:55.705: INFO: ssh prow@34.127.24.56:22: stdout: "" Jan 28 22:02:55.705: INFO: ssh prow@34.127.24.56:22: stderr: "" Jan 28 22:02:55.705: INFO: ssh prow@34.127.24.56:22: exit code: 0 Jan 28 22:02:55.705: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-jdvv condition Ready to be false Jan 28 22:02:55.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:55.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:55.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:57.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:57.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:02:57.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:02:59.883: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:00.018: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:00.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:03:01.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:02.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 22:03:02.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-gw8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:02:22 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:02:27 +0000 UTC}]. Failure Jan 28 22:03:03.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:04.120: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-rndd condition Ready to be true Jan 28 22:03:04.120: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:03:04.120: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xkczn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:03:04.120: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 22:03:04.166: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=false. Elapsed: 45.263513ms Jan 28 22:03:04.166: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xkczn' on 'bootstrap-e2e-minion-group-gw8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:02:22 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:03:02 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:22 +0000 UTC }] Jan 28 22:03:04.166: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=false. Elapsed: 45.287283ms Jan 28 22:03:04.166: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-gw8s' on 'bootstrap-e2e-minion-group-gw8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 22:02:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:58:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:53:21 +0000 UTC }] Jan 28 22:03:04.166: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:03:06.020: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:06.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 22:03:06.212: INFO: Pod "metadata-proxy-v0.1-xkczn": Phase="Running", Reason="", readiness=true. Elapsed: 2.09180216s Jan 28 22:03:06.212: INFO: Pod "metadata-proxy-v0.1-xkczn" satisfied condition "running and ready, or succeeded" Jan 28 22:03:06.212: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.091810539s Jan 28 22:03:06.212: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-gw8s" satisfied condition "running and ready, or succeeded" Jan 28 22:03:06.212: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-gw8s metadata-proxy-v0.1-xkczn] Jan 28 22:03:06.212: INFO: Reboot successful on node bootstrap-e2e-minion-group-gw8s Jan 28 22:03:08.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:08.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:10.106: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:10.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:12.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:12.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:14.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:14.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:16.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:16.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:18.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:18.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:20.320: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 22:03:20.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:22.365: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:22.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:24.409: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:24.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:26.452: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:26.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:28.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:28.689: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:30.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:30.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:32.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 22:03:32.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure Jan 28 22:03:34.630: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 28 22:03:34.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:36.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:36.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:38.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:38.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:40.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:40.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:42.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:42.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:44.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:45.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:46.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:47.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:48.928: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:49.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:50.973: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:51.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:53.016: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:53.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:55.060: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:55.275: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:57.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:57.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:03:59.148: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:03:59.363: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:01.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:01.406: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:03.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:03.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:05.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:05.491: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:07.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:07.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:09.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:09.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:11.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:11.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:13.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:13.674: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:15.496: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:15.717: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:17.539: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:17.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:19.582: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:19.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:21.625: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:21.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:23.668: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:23.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:25.712: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:25.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:27.755: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:27.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:29.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:30.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:31.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:32.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:33.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:34.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:35.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:36.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:37.971: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:38.207: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:40.014: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:40.250: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:42.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:42.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:44.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:44.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:46.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:46.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:48.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:48.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:50.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:50.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:52.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:52.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:54.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-jdvv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 22:04:54.556: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:56.345: INFO: Node bootstrap-e2e-minion-group-jdvv didn't reach desired Ready condition status (false) within 2m0s
Jan 28 22:04:56.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:04:58.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:00.710: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:02.753: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:04.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:06.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:08.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:10.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:12.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:15.018: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:17.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:19.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:21.166: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:23.209: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:25.252: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:27.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:29.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 22:03:02 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 22:03:07 +0000 UTC}]. Failure
Jan 28 22:05:31.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:05:30 +0000 UTC}]. Failure
Jan 28 22:05:33.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:05:30 +0000 UTC}]. Failure
Jan 28 22:05:35.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-rndd is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 22:05:30 +0000 UTC}]. Failure
Jan 28 22:05:37.515: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:05:37.515: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-8gbc7" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:05:37.515: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-rndd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 22:05:37.558: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd": Phase="Running", Reason="", readiness=true. Elapsed: 43.284799ms
Jan 28 22:05:37.558: INFO: Pod "metadata-proxy-v0.1-8gbc7": Phase="Running", Reason="", readiness=true. Elapsed: 43.337171ms
Jan 28 22:05:37.558: INFO: Pod "metadata-proxy-v0.1-8gbc7" satisfied condition "running and ready, or succeeded"
Jan 28 22:05:37.558: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-rndd" satisfied condition "running and ready, or succeeded"
Jan 28 22:05:37.558: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-rndd metadata-proxy-v0.1-8gbc7]
Jan 28 22:05:37.558: INFO: Reboot successful on node bootstrap-e2e-minion-group-rndd
Jan 28 22:05:37.558: INFO: Node bootstrap-e2e-minion-group-jdvv failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 22:05:37.559
< Exit [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/28/23 22:05:37.559 (4m4.803s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 22:05:37.559
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 22:05:37.559
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-77sdd to bootstrap-e2e-minion-group-gw8s
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.010007141s (1.010017589s including waiting)
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-77sdd
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-77sdd
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-77sdd: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-77sdd_kube-system(db0c09f1-c4d8-4e56-ab71-b0803b234d20)
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-8xrbf to bootstrap-e2e-minion-group-jdvv
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.107628334s (2.107641232s including waiting)
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": dial tcp 10.64.3.15:8181: connect: connection refused
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-8xrbf_kube-system(f16a4d9b-c0c6-4f1c-94d6-b9a2f091b21e)
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.20:8181/ready": dial tcp 10.64.3.20:8181: connect: connection refused
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container coredns
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f-8xrbf: {kubelet bootstrap-e2e-minion-group-jdvv} Unhealthy: Readiness probe failed: Get "http://10.64.3.28:8181/ready": dial tcp 10.64.3.28:8181: connect: connection refused
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-8xrbf
Jan 28 22:05:37.621: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-77sdd
Jan 28 22:05:37.621: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 28 22:05:37.621: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 28 22:05:37.621: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 28 22:05:37.621: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 28 22:05:37.621: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 28 22:05:37.621: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 28 22:05:37.621: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 28 22:05:37.621: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 28 22:05:37.621: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 28 22:05:37.621: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 28 22:05:37.621: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3a39 became leader
Jan 28 22:05:37.621: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_5712c became leader
Jan 28 22:05:37.621: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_da42f became leader
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-npfvc to bootstrap-e2e-minion-group-gw8s
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 620.414125ms (620.448513ms including waiting)
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-npfvc_kube-system(cd16d88d-4ef4-4c9a-96df-86fb4c70ef13)
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Created: Created container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Started: Started container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} Killing: Stopping container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-npfvc: {kubelet bootstrap-e2e-minion-group-gw8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-npfvc_kube-system(cd16d88d-4ef4-4c9a-96df-86fb4c70ef13)
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-t5bmd to bootstrap-e2e-minion-group-jdvv
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.384242476s (1.38425164s including waiting)
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-t5bmd_kube-system(07681149-8b9c-4c0d-bb8b-75eaf2c0c570)
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Created: Created container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Started: Started container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-t5bmd: {kubelet bootstrap-e2e-minion-group-jdvv} Killing: Stopping container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-twq5s to bootstrap-e2e-minion-group-rndd
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 602.431484ms (602.449236ms including waiting)
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Killing: Stopping container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-twq5s_kube-system(de9ecb8f-d586-41fd-a04d-41f45f7ea0bf)
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {node-controller } NodeNotReady: Node is not ready
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Created: Created container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent-twq5s: {kubelet bootstrap-e2e-minion-group-rndd} Started: Started container konnectivity-agent
Jan 28 22:05:37.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-t5bmd
Jan 28 22:05:37.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-twq5s
Jan 28 22:05:37.621: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-npfvc
Jan 28 22:05:37.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 28 22:05:37.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 28 22:05:37.621: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivi