go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
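The --ginkgo.focus value is a regular expression with whitespace and punctuation escaped; with the escapes removed it matches the full spec name:

  Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards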
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:02:32.675
There were additional failures detected after the initial failure. These are visible in the timeline.

from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:00:12.737 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:00:12.737 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:00:12.737 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:00:12.737 Jan 28 21:00:12.737: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:00:12.739 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 21:00:12.866 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 21:00:12.947 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:00:13.029 (292ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:00:13.029 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:00:13.029 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 21:00:13.029 Jan 28 21:00:13.124: INFO: Getting bootstrap-e2e-minion-group-jq3j Jan 28 21:00:13.124: INFO: Getting bootstrap-e2e-minion-group-g05r Jan 28 21:00:13.124: INFO: Getting bootstrap-e2e-minion-group-bs1f Jan 28 21:00:13.166: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jq3j condition Ready to be true Jan 28 21:00:13.199: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g05r condition Ready to be true Jan 28 21:00:13.199: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bs1f condition Ready to be true Jan 28 21:00:13.211: INFO: Node bootstrap-e2e-minion-group-jq3j has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:00:13.211: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:00:13.211: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-x44dw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.211: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.243: INFO: Node bootstrap-e2e-minion-group-g05r has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:00:13.243: INFO: Node bootstrap-e2e-minion-group-bs1f has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-8gc49 kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-8gc49 
kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g05r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8gc49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.244: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.244: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vpw5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.244: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2dsmd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.253: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j": Phase="Running", Reason="", readiness=true. Elapsed: 42.828853ms Jan 28 21:00:13.254: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.254: INFO: Pod "metadata-proxy-v0.1-x44dw": Phase="Running", Reason="", readiness=true. Elapsed: 42.960621ms Jan 28 21:00:13.254: INFO: Pod "metadata-proxy-v0.1-x44dw" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.254: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:00:13.254: INFO: Getting external IP address for bootstrap-e2e-minion-group-jq3j Jan 28 21:00:13.254: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-jq3j(35.247.4.220:22) Jan 28 21:00:13.289: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.346508ms Jan 28 21:00:13.289: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.289: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=true. Elapsed: 45.583204ms Jan 28 21:00:13.289: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Pod "metadata-proxy-v0.1-2vpw5": Phase="Running", Reason="", readiness=true. Elapsed: 46.444512ms Jan 28 21:00:13.290: INFO: Pod "metadata-proxy-v0.1-2vpw5" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r": Phase="Running", Reason="", readiness=true. Elapsed: 46.67642ms Jan 28 21:00:13.290: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Pod "metadata-proxy-v0.1-2dsmd": Phase="Running", Reason="", readiness=true. 
Elapsed: 46.480358ms Jan 28 21:00:13.290: INFO: Pod "metadata-proxy-v0.1-2dsmd" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:00:13.290: INFO: Getting external IP address for bootstrap-e2e-minion-group-g05r Jan 28 21:00:13.290: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-g05r(34.168.227.18:22) Jan 28 21:00:13.290: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=true. Elapsed: 46.613298ms Jan 28 21:00:13.290: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-8gc49 kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:00:13.290: INFO: Getting external IP address for bootstrap-e2e-minion-group-bs1f Jan 28 21:00:13.290: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-bs1f(34.168.154.4:22) Jan 28 21:00:13.777: INFO: ssh prow@35.247.4.220:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 21:00:13.777: INFO: ssh prow@35.247.4.220:22: stdout: "" Jan 28 21:00:13.777: INFO: ssh prow@35.247.4.220:22: stderr: "" Jan 28 21:00:13.777: INFO: ssh prow@35.247.4.220:22: exit code: 0 Jan 28 21:00:13.777: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-jq3j condition Ready to be false Jan 28 21:00:13.814: INFO: ssh prow@34.168.154.4:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 21:00:13.814: INFO: ssh prow@34.168.154.4:22: stdout: "" Jan 28 21:00:13.814: INFO: ssh prow@34.168.154.4:22: stderr: "" Jan 28 21:00:13.814: INFO: ssh prow@34.168.154.4:22: exit code: 0 Jan 28 21:00:13.814: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-bs1f condition Ready to be false Jan 28 21:00:13.817: INFO: ssh prow@34.168.227.18:22: command: nohup sh 
-c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 21:00:13.817: INFO: ssh prow@34.168.227.18:22: stdout: "" Jan 28 21:00:13.817: INFO: ssh prow@34.168.227.18:22: stderr: "" Jan 28 21:00:13.817: INFO: ssh prow@34.168.227.18:22: exit code: 0 Jan 28 21:00:13.817: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g05r condition Ready to be false Jan 28 21:00:13.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:13.857: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:13.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:15.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:15.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:15.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:17.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:17.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:17.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:19.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:20.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:20.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:22.001: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:22.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:22.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:00:24.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:24.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:24.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:26.091: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:26.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:26.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:28.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:28.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:28.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:30.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:30.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:30.218: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:32.221: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:32.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:32.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:34.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:34.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:34.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:36.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:36.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:36.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:38.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:38.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:38.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:40.397: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:40.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:40.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:42.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:42.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:42.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:44.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:44.518: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:44.519: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:46.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:46.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:46.562: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:48.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:00:48.605: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:48.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:50.619: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:50.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:50.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:52.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:52.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:52.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:54.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:54.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:54.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:56.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:56.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:56.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:58.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:58.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:58.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:00.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:00.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:00.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:02.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:02.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:02.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:04.964: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-jq3j condition Ready to be true Jan 28 21:01:04.986: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-bs1f condition Ready to be true Jan 28 21:01:04.987: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-g05r condition Ready to be true Jan 28 21:01:05.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:05.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:05.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:07.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:07.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:07.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:09.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:09.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:09.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:11.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:11.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:11.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:13.219: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:01:13.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:13.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:15.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:15.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:15.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:17.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:17.322: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:17.322: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:19.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:19.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:19.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:21.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:21.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:21.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:23.438: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:23.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:23.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:25.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:25.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:01:25.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:27.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:27.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:27.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:29.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:29.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:29.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:31.613: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:31.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:31.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:33.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:33.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:33.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:35.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:35.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:35.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:37.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:37.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:37.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:01:39.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:39.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:39.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:41.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:41.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:41.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:43.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:43.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:43.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:45.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:45.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:45.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:47.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:48.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:48.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:50.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:50.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:50.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:52.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:01:52.177: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:52.177: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:54.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:54.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:54.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:56.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:56.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:56.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:58.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:58.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:58.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:00.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:00.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:00.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:02.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:02.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:02.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:04.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:04.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:02:04.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:06.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:06.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:06.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:08.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:08.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:08.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:10.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:10.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:10.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:12.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:12.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:12.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:14.608: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:14.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:14.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:16.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:16.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:16.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:02:18.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:18.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:18.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:20.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:20.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:20.828: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:22.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:22.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:22.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:24.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:24.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:24.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:26.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-8gc49 kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8gc49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vpw5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:26.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:27.005: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 44.859873ms Jan 28 21:02:27.005: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 21:02:27.005: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=true. Elapsed: 44.883776ms Jan 28 21:02:27.005: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49" satisfied condition "running and ready, or succeeded" Jan 28 21:02:27.005: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 44.770648ms Jan 28 21:02:27.005: INFO: Pod "metadata-proxy-v0.1-2vpw5": Phase="Running", Reason="", readiness=true. Elapsed: 44.949614ms Jan 28 21:02:27.005: INFO: Pod "metadata-proxy-v0.1-2vpw5" satisfied condition "running and ready, or succeeded" Jan 28 21:02:27.005: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:02:01 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:02:01 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:02:28.911: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:02:28.911: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-x44dw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:28.911: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:28.956: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j": Phase="Running", Reason="", readiness=false. Elapsed: 44.324707ms Jan 28 21:02:28.956: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jq3j' on 'bootstrap-e2e-minion-group-jq3j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:01:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:07 +0000 UTC }] Jan 28 21:02:28.956: INFO: Pod "metadata-proxy-v0.1-x44dw": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.501448ms Jan 28 21:02:28.956: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-x44dw' on 'bootstrap-e2e-minion-group-jq3j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:01:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:55:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:07 +0000 UTC }] Jan 28 21:02:29.004: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:02:29.004: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g05r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:29.004: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2dsmd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:29.047: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r": Phase="Running", Reason="", readiness=true. Elapsed: 43.105442ms Jan 28 21:02:29.047: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r" satisfied condition "running and ready, or succeeded" Jan 28 21:02:29.047: INFO: Pod "metadata-proxy-v0.1-2dsmd": Phase="Running", Reason="", readiness=true. Elapsed: 43.049942ms Jan 28 21:02:29.047: INFO: Pod "metadata-proxy-v0.1-2dsmd" satisfied condition "running and ready, or succeeded" Jan 28 21:02:29.047: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:02:29.047: INFO: Reboot successful on node bootstrap-e2e-minion-group-g05r Jan 28 21:02:29.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 2.088391905s Jan 28 21:02:29.048: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:02:01 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:02:01 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:02:30.996: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-jq3j: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jq3j": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:30.996: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-jq3j failed to be running and ready, or succeeded. Jan 28 21:02:30.996: INFO: Encountered non-retryable error while getting pod kube-system/metadata-proxy-v0.1-x44dw: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:30.996: INFO: Pod metadata-proxy-v0.1-x44dw failed to be running and ready, or succeeded. Jan 28 21:02:30.996: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:02:30.996: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-jq3j: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:07 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:01:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-28 20:52:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-28 20:55:32 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 20:52:10 +0000 UTC,FinishedAt:2023-01-28 20:54:33 +0000 UTC,ContainerID:containerd://61bf60d33ad616b29d859c8d46e2aec7137266a3e853076d9fb0815374f30c30,}} Ready:true RestartCount:2 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426 ImageID:sha256:7cfe96b1b0a6dab2250fd7fe9d39abd4ae7fc2b1797108dee1d98e2415ede8aa ContainerID:containerd://bd9ecb149f940a9b1c7df763af2a962319ae3ce4189b978424a929c0ff4121be Started:0xc0011e5a07}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 21:02:31.036: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-jq3j/kube-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jq3j/log?container=kube-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.036: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-jq3j/kube-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jq3j/log?container=kube-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.036: INFO: Status for not ready pod kube-system/metadata-proxy-v0.1-x44dw: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:01:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:55:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-28 20:52:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:metadata-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-28 20:55:33 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 
20:52:09 +0000 UTC,FinishedAt:2023-01-28 20:54:33 +0000 UTC,ContainerID:containerd://81fe1ed7cb83523fc7632e69593498efda5caac0972da30c19daeab4a85a2bb8,}} Ready:true RestartCount:1 Image:registry.k8s.io/metadata-proxy:v0.1.12 ImageID:registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a ContainerID:containerd://80bd387296ebb88832ee8a5f61ff66b50b40ddbd0d7c1b609d284113c2b303b8 Started:0xc000ea2b67} {Name:prometheus-to-sd-exporter State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-28 20:55:33 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 20:52:11 +0000 UTC,FinishedAt:2023-01-28 20:54:33 +0000 UTC,ContainerID:containerd://9c8ca0caa690d10e1d2185ed2342de435e3e2224ed8886243f2891bb33ae7aa7,}} Ready:true RestartCount:1 Image:gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1 ImageID:gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 ContainerID:containerd://131f715876c5d26054c7efa287fc021028973270a70f75c9aac8697570fae038 Started:0xc000ea2b6f}] QOSClass:Guaranteed EphemeralContainerStatuses:[]} Jan 28 21:02:31.044: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-bs1f": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:31.044: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-bs1f failed to be running and ready, or succeeded. Jan 28 21:02:31.044: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-8gc49 kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:02:31.045: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:02:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:02:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.138.0.5 PodIPs:[{IP:10.138.0.5}] StartTime:2023-01-28 20:52:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 40s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 20:58:13 +0000 UTC,FinishedAt:2023-01-28 21:02:01 +0000 UTC,ContainerID:containerd://6667d1d016339fc53a08612593cba670e32542fd3e43499b05e226e475f05710,}} Ready:false RestartCount:5 
Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426 ImageID:sha256:7cfe96b1b0a6dab2250fd7fe9d39abd4ae7fc2b1797108dee1d98e2415ede8aa ContainerID:containerd://6667d1d016339fc53a08612593cba670e32542fd3e43499b05e226e475f05710 Started:0xc00377d98f}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 21:02:31.075: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-x44dw/metadata-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw/log?container=metadata-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.075: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-x44dw/metadata-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw/log?container=metadata-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.084: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f/kube-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-bs1f/log?container=kube-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.084: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f/kube-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-bs1f/log?container=kube-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.115: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-x44dw/prometheus-to-sd-exporter, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.115: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-x44dw/prometheus-to-sd-exporter, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.115: INFO: Node bootstrap-e2e-minion-group-bs1f failed reboot test. Jan 28 21:02:31.115: INFO: Node bootstrap-e2e-minion-group-jq3j failed reboot test. 
Jan 28 21:02:31.115: INFO: Executing termination hook on nodes Jan 28 21:02:31.115: INFO: Getting external IP address for bootstrap-e2e-minion-group-bs1f Jan 28 21:02:31.115: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-bs1f(34.168.154.4:22) Jan 28 21:02:31.631: INFO: ssh prow@34.168.154.4:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 21:02:31.631: INFO: ssh prow@34.168.154.4:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 21:00:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 21:02:31.631: INFO: ssh prow@34.168.154.4:22: stderr: "" Jan 28 21:02:31.631: INFO: ssh prow@34.168.154.4:22: exit code: 0 Jan 28 21:02:31.631: INFO: Getting external IP address for bootstrap-e2e-minion-group-g05r Jan 28 21:02:31.631: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-g05r(34.168.227.18:22) Jan 28 21:02:32.155: INFO: ssh prow@34.168.227.18:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 21:02:32.155: INFO: ssh prow@34.168.227.18:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 21:00:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 21:02:32.155: INFO: ssh prow@34.168.227.18:22: stderr: "" Jan 28 21:02:32.155: INFO: ssh prow@34.168.227.18:22: exit code: 0 Jan 28 21:02:32.155: INFO: Getting external IP address for bootstrap-e2e-minion-group-jq3j Jan 28 21:02:32.155: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-jq3j(35.247.4.220:22) Jan 28 21:02:32.675: INFO: ssh prow@35.247.4.220:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 21:02:32.675: INFO: ssh prow@35.247.4.220:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 21:00:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 21:02:32.675: INFO: ssh prow@35.247.4.220:22: stderr: "" Jan 28 21:02:32.675: INFO: ssh prow@35.247.4.220:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:02:32.675 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 21:02:32.675 (2m19.646s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:02:32.675 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 21:02:32.675 Jan 28 21:02:32.715: INFO: Unexpected error: <*url.Error | 0xc00349cae0>: { Op: "Get", URL: "https://34.105.32.116/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc0025a88c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037610b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 105, 32, 116], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001067560>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.105.32.116/api/v1/namespaces/kube-system/events": dial tcp 34.105.32.116:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 21:02:32.715 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:02:32.715 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:02:32.715 Jan 28 21:02:32.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:02:32.755 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:02:32.755 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:02:32.755 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:02:32.755 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:02:32.755 STEP: Collecting events from namespace "reboot-9048". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 21:02:32.755 Jan 28 21:02:32.794: INFO: Unexpected error: failed to list events in namespace "reboot-9048": <*url.Error | 0xc00349d020>: { Op: "Get", URL: "https://34.105.32.116/api/v1/namespaces/reboot-9048/events", Err: <*net.OpError | 0xc0025a8ff0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002fc5e90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 105, 32, 116], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0010678a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:02:32.795 (40ms) [FAILED] failed to list events in namespace "reboot-9048": Get "https://34.105.32.116/api/v1/namespaces/reboot-9048/events": dial tcp 34.105.32.116:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/28/23 21:02:32.795 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:02:32.795 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:02:32.795 STEP: Destroying namespace "reboot-9048" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/28/23 21:02:32.795 [FAILED] Couldn't delete ns: "reboot-9048": Delete "https://34.105.32.116/api/v1/namespaces/reboot-9048": dial tcp 34.105.32.116:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.105.32.116/api/v1/namespaces/reboot-9048", Err:(*net.OpError)(0xc00202f310)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 21:02:32.835 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:02:32.835 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:02:32.835 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:02:32.835 (0s)
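For readability, the inbound-packet-drop command that the test pushes to each node over SSH (logged below as an escaped nohup string, and traced in the /tmp/drop-inbound.log termination-hook output above) expands to roughly the following; the comments are added here for explanation and are not part of the logged command:

nohup sh -c '
    set -x
    sleep 10
    # keep loopback traffic working so local services on the node are unaffected
    while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
    # drop every other inbound packet, cutting the node off from the control plane
    while true; do sudo iptables -I INPUT 2 -j DROP && break; done
    date
    sleep 120
    # after ~2 minutes, remove the DROP rule and the loopback ACCEPT rule to restore traffic
    while true; do sudo iptables -D INPUT -j DROP && break; done
    while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-inbound.log 2>&1 &

The test then expects each node to go NotReady while the DROP rule is in place and to return to Ready (with its kube-system pods running) within the timeout once the rule is removed; in this run, bootstrap-e2e-minion-group-bs1f and bootstrap-e2e-minion-group-jq3j did not recover in time.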
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:02:32.675 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:00:12.737 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:00:12.737 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:00:12.737 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:00:12.737 Jan 28 21:00:12.737: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:00:12.739 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 21:00:12.866 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 21:00:12.947 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:00:13.029 (292ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:00:13.029 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:00:13.029 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 21:00:13.029 Jan 28 21:00:13.124: INFO: Getting bootstrap-e2e-minion-group-jq3j Jan 28 21:00:13.124: INFO: Getting bootstrap-e2e-minion-group-g05r Jan 28 21:00:13.124: INFO: Getting bootstrap-e2e-minion-group-bs1f Jan 28 21:00:13.166: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jq3j condition Ready to be true Jan 28 21:00:13.199: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g05r condition Ready to be true Jan 28 21:00:13.199: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bs1f condition Ready to be true Jan 28 21:00:13.211: INFO: Node bootstrap-e2e-minion-group-jq3j has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:00:13.211: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:00:13.211: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-x44dw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.211: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.243: INFO: Node bootstrap-e2e-minion-group-g05r has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:00:13.243: INFO: Node bootstrap-e2e-minion-group-bs1f has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-8gc49 kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-8gc49 
kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g05r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.243: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8gc49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.244: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.244: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vpw5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.244: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2dsmd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:00:13.253: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j": Phase="Running", Reason="", readiness=true. Elapsed: 42.828853ms Jan 28 21:00:13.254: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.254: INFO: Pod "metadata-proxy-v0.1-x44dw": Phase="Running", Reason="", readiness=true. Elapsed: 42.960621ms Jan 28 21:00:13.254: INFO: Pod "metadata-proxy-v0.1-x44dw" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.254: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:00:13.254: INFO: Getting external IP address for bootstrap-e2e-minion-group-jq3j Jan 28 21:00:13.254: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-jq3j(35.247.4.220:22) Jan 28 21:00:13.289: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.346508ms Jan 28 21:00:13.289: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.289: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=true. Elapsed: 45.583204ms Jan 28 21:00:13.289: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Pod "metadata-proxy-v0.1-2vpw5": Phase="Running", Reason="", readiness=true. Elapsed: 46.444512ms Jan 28 21:00:13.290: INFO: Pod "metadata-proxy-v0.1-2vpw5" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r": Phase="Running", Reason="", readiness=true. Elapsed: 46.67642ms Jan 28 21:00:13.290: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Pod "metadata-proxy-v0.1-2dsmd": Phase="Running", Reason="", readiness=true. 
Elapsed: 46.480358ms Jan 28 21:00:13.290: INFO: Pod "metadata-proxy-v0.1-2dsmd" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:00:13.290: INFO: Getting external IP address for bootstrap-e2e-minion-group-g05r Jan 28 21:00:13.290: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-g05r(34.168.227.18:22) Jan 28 21:00:13.290: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=true. Elapsed: 46.613298ms Jan 28 21:00:13.290: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" satisfied condition "running and ready, or succeeded" Jan 28 21:00:13.290: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-8gc49 kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:00:13.290: INFO: Getting external IP address for bootstrap-e2e-minion-group-bs1f Jan 28 21:00:13.290: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-bs1f(34.168.154.4:22) Jan 28 21:00:13.777: INFO: ssh prow@35.247.4.220:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 21:00:13.777: INFO: ssh prow@35.247.4.220:22: stdout: "" Jan 28 21:00:13.777: INFO: ssh prow@35.247.4.220:22: stderr: "" Jan 28 21:00:13.777: INFO: ssh prow@35.247.4.220:22: exit code: 0 Jan 28 21:00:13.777: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-jq3j condition Ready to be false Jan 28 21:00:13.814: INFO: ssh prow@34.168.154.4:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 21:00:13.814: INFO: ssh prow@34.168.154.4:22: stdout: "" Jan 28 21:00:13.814: INFO: ssh prow@34.168.154.4:22: stderr: "" Jan 28 21:00:13.814: INFO: ssh prow@34.168.154.4:22: exit code: 0 Jan 28 21:00:13.814: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-bs1f condition Ready to be false Jan 28 21:00:13.817: INFO: ssh prow@34.168.227.18:22: command: nohup sh 
-c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 28 21:00:13.817: INFO: ssh prow@34.168.227.18:22: stdout: "" Jan 28 21:00:13.817: INFO: ssh prow@34.168.227.18:22: stderr: "" Jan 28 21:00:13.817: INFO: ssh prow@34.168.227.18:22: exit code: 0 Jan 28 21:00:13.817: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g05r condition Ready to be false Jan 28 21:00:13.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:13.857: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:13.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:15.864: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:15.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:15.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:17.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:17.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:17.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:19.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:20.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:20.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:22.001: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:22.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:22.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:00:24.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:24.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:24.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:26.091: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:26.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:26.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:28.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:28.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:28.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:30.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:30.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:30.218: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:32.221: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:32.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:32.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:34.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:34.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:34.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:36.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:36.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:36.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:38.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:38.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:38.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:40.397: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:40.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:40.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:42.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:42.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:42.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:44.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:44.518: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:44.519: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:46.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:46.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:46.562: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:48.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:00:48.605: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:48.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:50.619: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:50.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:50.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:52.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:52.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:52.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:54.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:54.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:54.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:56.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:56.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:56.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:58.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:58.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:00:58.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:00.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:00.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:00.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:02.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:02.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:02.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:01:04.964: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-jq3j condition Ready to be true Jan 28 21:01:04.986: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-bs1f condition Ready to be true Jan 28 21:01:04.987: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-g05r condition Ready to be true Jan 28 21:01:05.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:05.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:05.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:07.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:07.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:07.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:09.132: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:09.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:09.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:11.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:11.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:11.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:13.219: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:01:13.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:13.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:15.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:15.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:15.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:17.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:17.322: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:17.322: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:19.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:19.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:19.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:21.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:21.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:21.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:23.438: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:23.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:23.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:25.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:25.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:01:25.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:27.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:27.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:27.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:29.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:29.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:29.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:31.613: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:31.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:31.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:33.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:33.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:33.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:35.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:35.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:35.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:37.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:37.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:37.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:01:39.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:39.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:39.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:41.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:41.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:41.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:43.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:43.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:43.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:45.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:45.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:45.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:47.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:48.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:48.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:50.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:50.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:50.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:52.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:01:52.177: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:52.177: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:54.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:54.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:54.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:56.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:56.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:56.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:58.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:58.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:01:58.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:00.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:00.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:00.367: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:02.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:02.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:02.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:04.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:04.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:02:04.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:06.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:06.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:06.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:08.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:08.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:08.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:10.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:10.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:10.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:12.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:12.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:12.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:14.608: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:14.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:14.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:16.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:16.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:16.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 21:02:18.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:18.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:18.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:20.739: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:20.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:20.828: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:22.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:22.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:22.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:24.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:24.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-bs1f is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:24.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:26.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-8gc49 kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8gc49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vpw5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:26.960: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:26.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:02:27.005: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 44.859873ms Jan 28 21:02:27.005: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 21:02:27.005: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=true. Elapsed: 44.883776ms Jan 28 21:02:27.005: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49" satisfied condition "running and ready, or succeeded" Jan 28 21:02:27.005: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 44.770648ms Jan 28 21:02:27.005: INFO: Pod "metadata-proxy-v0.1-2vpw5": Phase="Running", Reason="", readiness=true. Elapsed: 44.949614ms Jan 28 21:02:27.005: INFO: Pod "metadata-proxy-v0.1-2vpw5" satisfied condition "running and ready, or succeeded" Jan 28 21:02:27.005: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:02:01 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:02:01 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:02:28.911: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:02:28.911: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-x44dw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:28.911: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:28.956: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j": Phase="Running", Reason="", readiness=false. Elapsed: 44.324707ms Jan 28 21:02:28.956: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-jq3j' on 'bootstrap-e2e-minion-group-jq3j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:01:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:07 +0000 UTC }] Jan 28 21:02:28.956: INFO: Pod "metadata-proxy-v0.1-x44dw": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.501448ms Jan 28 21:02:28.956: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-x44dw' on 'bootstrap-e2e-minion-group-jq3j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:01:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:55:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:07 +0000 UTC }] Jan 28 21:02:29.004: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:02:29.004: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-g05r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:29.004: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2dsmd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:02:29.047: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r": Phase="Running", Reason="", readiness=true. Elapsed: 43.105442ms Jan 28 21:02:29.047: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r" satisfied condition "running and ready, or succeeded" Jan 28 21:02:29.047: INFO: Pod "metadata-proxy-v0.1-2dsmd": Phase="Running", Reason="", readiness=true. Elapsed: 43.049942ms Jan 28 21:02:29.047: INFO: Pod "metadata-proxy-v0.1-2dsmd" satisfied condition "running and ready, or succeeded" Jan 28 21:02:29.047: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-2dsmd kube-proxy-bootstrap-e2e-minion-group-g05r] Jan 28 21:02:29.047: INFO: Reboot successful on node bootstrap-e2e-minion-group-g05r Jan 28 21:02:29.048: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 2.088391905s Jan 28 21:02:29.048: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:02:01 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:02:01 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:02:30.996: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-jq3j: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jq3j": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:30.996: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-jq3j failed to be running and ready, or succeeded. Jan 28 21:02:30.996: INFO: Encountered non-retryable error while getting pod kube-system/metadata-proxy-v0.1-x44dw: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:30.996: INFO: Pod metadata-proxy-v0.1-x44dw failed to be running and ready, or succeeded. Jan 28 21:02:30.996: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:02:30.996: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-jq3j: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:07 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:01:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-28 20:52:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-28 20:55:32 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 20:52:10 +0000 UTC,FinishedAt:2023-01-28 20:54:33 +0000 UTC,ContainerID:containerd://61bf60d33ad616b29d859c8d46e2aec7137266a3e853076d9fb0815374f30c30,}} Ready:true RestartCount:2 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426 ImageID:sha256:7cfe96b1b0a6dab2250fd7fe9d39abd4ae7fc2b1797108dee1d98e2415ede8aa ContainerID:containerd://bd9ecb149f940a9b1c7df763af2a962319ae3ce4189b978424a929c0ff4121be Started:0xc0011e5a07}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 21:02:31.036: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-jq3j/kube-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jq3j/log?container=kube-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.036: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-jq3j/kube-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-jq3j/log?container=kube-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.036: INFO: Status for not ready pod kube-system/metadata-proxy-v0.1-x44dw: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:01:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:55:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-28 20:52:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:metadata-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-28 20:55:33 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 
20:52:09 +0000 UTC,FinishedAt:2023-01-28 20:54:33 +0000 UTC,ContainerID:containerd://81fe1ed7cb83523fc7632e69593498efda5caac0972da30c19daeab4a85a2bb8,}} Ready:true RestartCount:1 Image:registry.k8s.io/metadata-proxy:v0.1.12 ImageID:registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a ContainerID:containerd://80bd387296ebb88832ee8a5f61ff66b50b40ddbd0d7c1b609d284113c2b303b8 Started:0xc000ea2b67} {Name:prometheus-to-sd-exporter State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-28 20:55:33 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-28 20:52:11 +0000 UTC,FinishedAt:2023-01-28 20:54:33 +0000 UTC,ContainerID:containerd://9c8ca0caa690d10e1d2185ed2342de435e3e2224ed8886243f2891bb33ae7aa7,}} Ready:true RestartCount:1 Image:gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1 ImageID:gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 ContainerID:containerd://131f715876c5d26054c7efa287fc021028973270a70f75c9aac8697570fae038 Started:0xc000ea2b6f}] QOSClass:Guaranteed EphemeralContainerStatuses:[]} Jan 28 21:02:31.044: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-bs1f": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:31.044: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-bs1f failed to be running and ready, or succeeded. Jan 28 21:02:31.044: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-8gc49 kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0] Jan 28 21:02:31.045: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:02:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:02:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.138.0.5 PodIPs:[{IP:10.138.0.5}] StartTime:2023-01-28 20:52:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 40s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 20:58:13 +0000 UTC,FinishedAt:2023-01-28 21:02:01 +0000 UTC,ContainerID:containerd://6667d1d016339fc53a08612593cba670e32542fd3e43499b05e226e475f05710,}} Ready:false RestartCount:5 
Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426 ImageID:sha256:7cfe96b1b0a6dab2250fd7fe9d39abd4ae7fc2b1797108dee1d98e2415ede8aa ContainerID:containerd://6667d1d016339fc53a08612593cba670e32542fd3e43499b05e226e475f05710 Started:0xc00377d98f}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 21:02:31.075: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-x44dw/metadata-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw/log?container=metadata-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.075: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-x44dw/metadata-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw/log?container=metadata-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.084: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f/kube-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-bs1f/log?container=kube-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.084: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f/kube-proxy, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-bs1f/log?container=kube-proxy&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.115: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-x44dw/prometheus-to-sd-exporter, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.115: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-x44dw/prometheus-to-sd-exporter, err: Get "https://34.105.32.116/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-x44dw/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 34.105.32.116:443: connect: connection refused: Jan 28 21:02:31.115: INFO: Node bootstrap-e2e-minion-group-bs1f failed reboot test. Jan 28 21:02:31.115: INFO: Node bootstrap-e2e-minion-group-jq3j failed reboot test. 
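The status dumps above show kube-proxy on bootstrap-e2e-minion-group-bs1f stuck in CrashLoopBackOff (RestartCount:5) while every log-retrieval attempt fails because the apiserver at 34.105.32.116 is refusing connections. As a hypothetical manual follow-up (not part of the test itself), once the apiserver is reachable again the same pods and nodes named in the log could be inspected with standard kubectl commands:

  # Pod and node names are taken from the status dumps above.
  kubectl -n kube-system get pod kube-proxy-bootstrap-e2e-minion-group-bs1f -o wide
  # With RestartCount:5 and CrashLoopBackOff, the previous container's log is
  # usually the interesting one.
  kubectl -n kube-system logs kube-proxy-bootstrap-e2e-minion-group-bs1f --previous
  # Check whether the nodes recovered their Ready condition after the drop window.
  kubectl get nodes -o wide
  kubectl describe node bootstrap-e2e-minion-group-bs1f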
Jan 28 21:02:31.115: INFO: Executing termination hook on nodes Jan 28 21:02:31.115: INFO: Getting external IP address for bootstrap-e2e-minion-group-bs1f Jan 28 21:02:31.115: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-bs1f(34.168.154.4:22) Jan 28 21:02:31.631: INFO: ssh prow@34.168.154.4:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 21:02:31.631: INFO: ssh prow@34.168.154.4:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 21:00:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 21:02:31.631: INFO: ssh prow@34.168.154.4:22: stderr: "" Jan 28 21:02:31.631: INFO: ssh prow@34.168.154.4:22: exit code: 0 Jan 28 21:02:31.631: INFO: Getting external IP address for bootstrap-e2e-minion-group-g05r Jan 28 21:02:31.631: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-g05r(34.168.227.18:22) Jan 28 21:02:32.155: INFO: ssh prow@34.168.227.18:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 21:02:32.155: INFO: ssh prow@34.168.227.18:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 21:00:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 21:02:32.155: INFO: ssh prow@34.168.227.18:22: stderr: "" Jan 28 21:02:32.155: INFO: ssh prow@34.168.227.18:22: exit code: 0 Jan 28 21:02:32.155: INFO: Getting external IP address for bootstrap-e2e-minion-group-jq3j Jan 28 21:02:32.155: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-jq3j(35.247.4.220:22) Jan 28 21:02:32.675: INFO: ssh prow@35.247.4.220:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 28 21:02:32.675: INFO: ssh prow@35.247.4.220:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSat Jan 28 21:00:23 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 28 21:02:32.675: INFO: ssh prow@35.247.4.220:22: stderr: "" Jan 28 21:02:32.675: INFO: ssh prow@35.247.4.220:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:02:32.675 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/28/23 21:02:32.675 (2m19.646s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:02:32.675 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 21:02:32.675 Jan 28 21:02:32.715: INFO: Unexpected error: <*url.Error | 0xc00349cae0>: { Op: "Get", URL: "https://34.105.32.116/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc0025a88c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037610b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 105, 32, 116], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001067560>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.105.32.116/api/v1/namespaces/kube-system/events": dial tcp 34.105.32.116:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 21:02:32.715 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:02:32.715 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:02:32.715 Jan 28 21:02:32.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:02:32.755 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:02:32.755 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:02:32.755 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:02:32.755 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:02:32.755 STEP: Collecting events from namespace "reboot-9048". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 21:02:32.755 Jan 28 21:02:32.794: INFO: Unexpected error: failed to list events in namespace "reboot-9048": <*url.Error | 0xc00349d020>: { Op: "Get", URL: "https://34.105.32.116/api/v1/namespaces/reboot-9048/events", Err: <*net.OpError | 0xc0025a8ff0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002fc5e90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 105, 32, 116], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0010678a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:02:32.795 (40ms) [FAILED] failed to list events in namespace "reboot-9048": Get "https://34.105.32.116/api/v1/namespaces/reboot-9048/events": dial tcp 34.105.32.116:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/28/23 21:02:32.795 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:02:32.795 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:02:32.795 STEP: Destroying namespace "reboot-9048" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/28/23 21:02:32.795 [FAILED] Couldn't delete ns: "reboot-9048": Delete "https://34.105.32.116/api/v1/namespaces/reboot-9048": dial tcp 34.105.32.116:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.105.32.116/api/v1/namespaces/reboot-9048", Err:(*net.OpError)(0xc00202f310)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 21:02:32.835 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:02:32.835 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:02:32.835 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:02:32.835 (0s)
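For reference, the "+"-prefixed termination-hook stdout captured above traces a script along the following lines. This is only a reconstruction from the logged commands (the actual helper is generated by the reboot test and written to /tmp/drop-inbound.log on each node), shown here to make the failure window explicit:

  # Sketch reconstructed from the trace: drop all inbound packets for 120s
  # while keeping loopback traffic allowed; set -x produces the "+" trace.
  nohup sh -c '
    set -x
    sleep 10
    # Each command is retried until it succeeds, which is what produces the
    # repeated "+ true ... + break" pairs in the stdout.
    while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
    while true; do sudo iptables -I INPUT 2 -j DROP && break; done
    date
    sleep 120
    while true; do sudo iptables -D INPUT -j DROP && break; done
    while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
  ' >/tmp/drop-inbound.log 2>&1 &

Per the logged date (Sat Jan 28 21:00:23 UTC 2023) and the 120-second sleep, inbound traffic was dropped on all three nodes from roughly 21:00:23 to 21:02:23, only seconds before the test gave up at 21:02:32.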
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] wait for service account "default" in namespace "reboot-7385": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 21:11:41.315
from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:09:41.266 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:09:41.266 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:09:41.266 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:09:41.266 Jan 28 21:09:41.266: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:09:41.267 Jan 28 21:11:41.314: INFO: Unexpected error: <*fmt.wrapError | 0xc00500a000>: { msg: "wait for service account \"default\" in namespace \"reboot-7385\": timed out waiting for the condition", err: <*errors.errorString | 0xc000287c80>{ s: "timed out waiting for the condition", }, } [FAILED] wait for service account "default" in namespace "reboot-7385": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 21:11:41.315 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:11:41.315 (2m0.049s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:11:41.315 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 21:11:41.315 Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-6s4w8 to bootstrap-e2e-minion-group-g05r Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 959.482242ms (959.515491ms including waiting) Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-6s4w8_kube-system(924cbee6-6cd1-4108-a373-011bb84d0d00) Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Readiness probe failed: Get "http://10.64.1.6:8181/ready": dial tcp 10.64.1.6:8181: connect: connection refused Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-6s4w8 Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Readiness probe failed: Get "http://10.64.1.6:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-fhlmc to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.845509128s (3.845522943s including waiting) Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.7:8181/ready": dial tcp 10.64.2.7:8181: connect: connection refused Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} FailedMount: MountVolume.SetUp failed for volume "config-volume" : object "kube-system"/"coredns" not registered Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-fhlmc Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.17:8181/ready": dial tcp 10.64.2.17:8181: connect: connection refused Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-fhlmc_kube-system(05e23121-6d9c-4eff-9475-84347fef8c9a) Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.20:8181/ready": dial tcp 10.64.2.20:8181: connect: connection refused Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-fhlmc Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-fhlmc Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-6s4w8 Jan 28 21:11:41.368: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 21:11:41.368: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} 
SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(f4f6d281abb01fd97fbab9898b841ee8) Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_6915b became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b0b3b became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_956fb became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1c8a7 became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_81052 became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d293d became leader Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-fx6jw to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.49666404s (1.496680076s including waiting) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-fx6jw_kube-system(904c0f67-24cb-4230-b7fd-e6127549e246) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-nxmx5 to bootstrap-e2e-minion-group-g05r Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 578.085314ms (578.099087ms including waiting) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-tqnn5 to bootstrap-e2e-minion-group-jq3j Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 617.125424ms (617.132787ms including waiting) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-tqnn5_kube-system(35144846-d770-47bd-9635-2ce65f14a2c4) Jan 28 21:11:41.368: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-fx6jw Jan 28 21:11:41.368: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-tqnn5 Jan 28 21:11:41.368: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-nxmx5 Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 28 21:11:41.368: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 21:11:41.368: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2c884380-0d8c-4b1f-849d-e60b28ae1c8f became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_75d5164c-e463-455f-9e1a-3bb8a975cbd4 became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ab3c6bc7-5479-4e73-b234-4f40535396e8 became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_853ac3a7-82a3-46e7-997a-15e8b0419ae3 became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ad0d5dff-dfe0-4a81-b527-c10da0dbc2c6 became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8d4eee79-b6ca-4ae7-b745-24984fc0ea26 became leader Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8gc49 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.293082344s (1.293090978s including waiting) Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8gc49_kube-system(62e323eb-96c0-4789-9d04-b84f1884a825) Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set 
kube-dns-autoscaler-5f6455f985 to 1 Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0) Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0) Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container kube-proxy 
Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g05r_kube-system(6b09ace535a17263444ad2960f4b8959) Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
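The DNSConfigForming warnings above ("Nameserver limits were exceeded, some nameservers have been omitted") are raised by the kubelet when the node's resolv.conf lists more nameservers than it will propagate, so only the first few are applied (the log shows the three it kept). The sketch below just counts the entries the same way; the /etc/resolv.conf path and the limit of 3 are assumptions for illustration, not values taken from this run:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Assumption: inspect the node's /etc/resolv.conf and use 3 as the limit
	// the kubelet warning refers to.
	const maxNameservers = 3
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d nameservers: %v\n", len(servers), servers)
	if len(servers) > maxNameservers {
		fmt.Printf("only the first %d would be applied: %v\n", maxNameservers, servers[:maxNameservers])
	}
}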
Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-jq3j_kube-system(fcf7764eda52c0ab46d9357b02b9fc41) Jan 28 21:11:41.368: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 21:11:41.368: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 21:11:41.368: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4bb17d63-51a5-4714-9ac3-79c98c6cd91e became leader Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: 
bootstrap-e2e-master_93d26758-e2b8-479c-8d94-4cbc6d04d199 became leader Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e17d3031-0f2a-47be-a494-7efe111f6476 became leader Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_173d4f2f-433b-4a60-94fc-9a55200b0100 became leader Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_5d80d9e0-ade4-460c-a91f-8ca7cbe3fb84 became leader Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-rlkx5 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.450510051s (1.450555981s including waiting) Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container default-http-backend Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container default-http-backend Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container default-http-backend Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-rlkx5 Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container default-http-backend Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-rlkx5 Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Liveness probe failed: Get "http://10.64.2.13:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-rlkx5 Jan 28 21:11:41.368: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 21:11:41.368: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 21:11:41.368: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 21:11:41.368: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 21:11:41.368: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2dsmd to bootstrap-e2e-minion-group-g05r Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 717.875781ms (717.885972ms including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:11:41.368: 
INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.675684694s (1.675693368s including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2vpw5 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 744.62103ms (744.638761ms including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.836071057s (1.83608501s including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hpcd7 to bootstrap-e2e-master Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 658.17817ms (658.185211ms including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.237505268s (2.237512666s including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-x44dw to bootstrap-e2e-minion-group-jq3j Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 744.675824ms (744.694272ms including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet 
bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.542318307s (1.54232719s including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-x44dw Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2dsmd Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2vpw5 Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hpcd7 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
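The same "untolerated taint {node.kubernetes.io/not-ready: }" failure shows up here for metrics-server: while a rebooted node has not yet reported Ready again, it carries a not-ready taint and the scheduler skips it. A hedged client-go sketch for checking which taints and conditions a node currently carries follows; the node name and kubeconfig path are taken from this run's log, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Node name taken from the events above.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-bs1f", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("condition %s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
}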
Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-82bk2 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 5.632375276s (5.632383215s including waiting) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.113371361s (1.11338839s including waiting) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "https://10.64.2.2:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-82bk2 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-82bk2 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9 to bootstrap-e2e-minion-group-jq3j Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.305614898s (1.305632675s including waiting) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 945.612734ms (945.652382ms including waiting) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Container metrics-server failed liveness probe, will be restarted Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": dial tcp 10.64.0.5:10250: connect: connection refused Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
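The Killing / Created / Started cycle above, driven by liveness and readiness probes failing against the pod's changing IPs, is what produces the restart counts reported later in this dump. A minimal sketch for reading those per-container restart counts and last termination states directly is shown below; the pod name and kubeconfig path come from this log, the rest is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-v0.5.2-867b8754b9-gk8n9", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("%s: ready=%v restarts=%d\n", st.Name, st.Ready, st.RestartCount)
		if st.LastTerminationState.Terminated != nil {
			fmt.Printf("  last exit: code=%d reason=%q\n",
				st.LastTerminationState.Terminated.ExitCode,
				st.LastTerminationState.Terminated.Reason)
		}
	}
}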
Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.15:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-gk8n9 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.757791007s (2.757799348s including waiting) Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e2c01da0-0a7c-4c95-a545-053747d26c71) Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
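The volume-snapshot-controller BackOff entries above are the crash-loop signature that the later pod listing summarizes as a restart count. One way to pull only the warning-level events out of a dump like this is to filter on the event type; a sketch under the same kubeconfig assumption as the earlier examples, with the filter done client-side:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	evs, err := cs.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range evs.Items {
		// Keep only Warning events (BackOff, Unhealthy, FailedScheduling, ...).
		if e.Type != corev1.EventTypeWarning {
			continue
		}
		fmt.Printf("%s/%s %s: %s\n", e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}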
Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e2c01da0-0a7c-4c95-a545-053747d26c71) Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:11:41.368 (54ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:11:41.368 Jan 28 21:11:41.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:11:41.414 (46ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:11:41.414 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:11:41.414 STEP: Collecting events from namespace "reboot-7385". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 21:11:41.414 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/28/23 21:11:41.454 Jan 28 21:11:41.495: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 21:11:41.495: INFO: Jan 28 21:11:41.538: INFO: Logging node info for node bootstrap-e2e-master Jan 28 21:11:41.580: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 5a893541-4edb-4822-b656-8eb749851389 2263 0 2023-01-28 20:52:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 20:52:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 20:52:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 21:08:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 
0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:27 +0000 UTC,LastTransitionTime:2023-01-28 20:52:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.32.116,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f814f882cc154157460b3532a03d8644,SystemUUID:f814f882-cc15-4157-460b-3532a03d8644,BootID:6cb4da42-0e9f-4a20-86db-657430266c2b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:11:41.581: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 21:11:41.627: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 21:11:41.686: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container kube-apiserver ready: true, restart count 1 Jan 28 21:11:41.686: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 20:51:45 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 28 21:11:41.686: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 20:51:45 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 28 21:11:41.686: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 20:51:27 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container kube-scheduler ready: true, restart count 4 Jan 28 21:11:41.686: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container konnectivity-server-container ready: true, restart count 1 Jan 28 21:11:41.686: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container kube-controller-manager ready: false, restart count 6 Jan 28 21:11:41.686: INFO: metadata-proxy-v0.1-hpcd7 started at 2023-01-28 20:52:12 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:41.686: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 21:11:41.686: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 21:11:41.686: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 20:51:27 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container etcd-container ready: true, restart count 2 Jan 28 21:11:41.686: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 20:53:08 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container etcd-container ready: true, restart count 3 Jan 28 21:11:41.867: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 21:11:41.867: INFO: Logging node info for node bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.909: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-bs1f 08692535-a320-4dcb-91ff-1fa0ba2828d7 2213 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-bs1f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-28 21:07:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 21:07:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-bs1f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 
127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.154.4,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-bs1f.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-bs1f.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2ea9497a2a9005aa8e5e0f3ffad1e133,SystemUUID:2ea9497a-2a90-05aa-8e5e-0f3ffad1e133,BootID:a193f4d3-2147-447c-861e-3b0aa909997e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:11:41.909: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.954: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-bs1f Jan 28 21:11:42.014: INFO: volume-snapshot-controller-0 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container volume-snapshot-controller ready: false, restart count 8 Jan 28 
21:11:42.014: INFO: metadata-proxy-v0.1-2vpw5 started at 2023-01-28 20:52:09 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:42.014: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 21:11:42.014: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 21:11:42.014: INFO: konnectivity-agent-fx6jw started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 21:11:42.014: INFO: kube-proxy-bootstrap-e2e-minion-group-bs1f started at 2023-01-28 20:52:08 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container kube-proxy ready: true, restart count 8 Jan 28 21:11:42.014: INFO: l7-default-backend-8549d69d99-rlkx5 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container default-http-backend ready: true, restart count 2 Jan 28 21:11:42.014: INFO: coredns-6846b5b5f-fhlmc started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container coredns ready: false, restart count 7 Jan 28 21:11:42.014: INFO: kube-dns-autoscaler-5f6455f985-8gc49 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container autoscaler ready: false, restart count 6 Jan 28 21:11:42.202: INFO: Latency metrics for node bootstrap-e2e-minion-group-bs1f Jan 28 21:11:42.202: INFO: Logging node info for node bootstrap-e2e-minion-group-g05r Jan 28 21:11:42.245: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g05r 87185a8f-bb27-450e-89e5-8951dac6f0bd 2650 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g05r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 21:11:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-g05r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 
21:06:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.227.18,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g05r.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g05r.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4bb7737aadc011adf7a719d3300fb8fa,SystemUUID:4bb7737a-adc0-11ad-f7a7-19d3300fb8fa,BootID:64830d10-7653-4a01-b0dd-43c6906fa52f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:11:42.245: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-g05r Jan 28 21:11:42.297: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-g05r Jan 28 21:11:42.366: INFO: konnectivity-agent-nxmx5 started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.366: INFO: Container konnectivity-agent ready: false, restart count 1 Jan 28 21:11:42.366: INFO: coredns-6846b5b5f-6s4w8 started at 2023-01-28 20:52:21 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.366: INFO: Container coredns ready: true, restart count 4 Jan 28 21:11:42.366: INFO: kube-proxy-bootstrap-e2e-minion-group-g05r started at 2023-01-28 20:52:07 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.366: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 21:11:42.366: INFO: metadata-proxy-v0.1-2dsmd started at 2023-01-28 20:52:08 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:42.366: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 21:11:42.366: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 21:11:42.560: INFO: Latency metrics for node bootstrap-e2e-minion-group-g05r Jan 28 21:11:42.560: INFO: Logging node info for node bootstrap-e2e-minion-group-jq3j Jan 28 21:11:42.603: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-jq3j 2b2b9937-135b-4df7-9d57-10f4c3abef5d 2390 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-jq3j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:05:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-28 21:07:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-jq3j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.4.220,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-jq3j.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-jq3j.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de3b20bd84cffdb49aa767a4d3b2d6b6,SystemUUID:de3b20bd-84cf-fdb4-9aa7-67a4d3b2d6b6,BootID:671de56a-7689-4498-b6c5-8a1a18405efe,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:11:42.603: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-jq3j Jan 28 21:11:42.649: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-jq3j Jan 28 21:11:42.729: INFO: kube-proxy-bootstrap-e2e-minion-group-jq3j started at 2023-01-28 20:52:07 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.729: INFO: Container kube-proxy ready: true, restart count 4 Jan 28 21:11:42.729: INFO: metadata-proxy-v0.1-x44dw started at 2023-01-28 20:52:08 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:42.729: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 21:11:42.729: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 21:11:42.729: INFO: konnectivity-agent-tqnn5 started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.729: INFO: Container konnectivity-agent ready: false, restart count 4 Jan 28 21:11:42.729: INFO: metrics-server-v0.5.2-867b8754b9-gk8n9 started at 2023-01-28 20:52:40 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:42.729: INFO: Container metrics-server ready: true, restart count 8 Jan 28 21:11:42.729: INFO: Container metrics-server-nanny ready: true, restart count 8 Jan 28 21:11:42.896: 
INFO: Latency metrics for node bootstrap-e2e-minion-group-jq3j END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:11:42.896 (1.482s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:11:42.896 (1.482s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:11:42.896 STEP: Destroying namespace "reboot-7385" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 21:11:42.896 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:11:42.939 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:11:42.939 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:11:42.939 (0s)
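The failure above is the reboot timeout itself: the node dumps show all three minions eventually posting Ready again (with high container restart counts), but not within the window the test allows. A minimal triage sketch, assuming direct kubectl access to the same kubeconfig the suite uses — these commands are illustrative only and are not run by reboot.go:

# Which nodes are Ready, and when did they last transition?
kubectl get nodes -o wide
kubectl get node bootstrap-e2e-minion-group-bs1f -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.lastTransitionTime}{"\n"}{end}'

# Recent kube-system events and the pods scheduled on the slow node
kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 50
kubectl -n kube-system get pods -o wide --field-selector spec.nodeName=bootstrap-e2e-minion-group-bs1f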
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] wait for service account "default" in namespace "reboot-7385": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 21:11:41.315 from junit_01.xml
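This second failure is downstream of the first: framework setup waits up to two minutes (2m0.049s in the trace below) for the "default" ServiceAccount, which the serviceaccount controller inside kube-controller-manager creates in each new namespace, and the earlier dump shows kube-controller-manager on the master not ready with restart count 6. A quick hand check of that hypothesis, assuming kubectl access to the cluster — illustrative only, not part of the framework code:

kubectl -n reboot-7385 get serviceaccount default
kubectl -n kube-system get pod kube-controller-manager-bootstrap-e2e-master
kubectl -n kube-system logs kube-controller-manager-bootstrap-e2e-master --previous | grep -i serviceaccount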
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:09:41.266 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:09:41.266 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:09:41.266 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:09:41.266 Jan 28 21:09:41.266: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:09:41.267 Jan 28 21:11:41.314: INFO: Unexpected error: <*fmt.wrapError | 0xc00500a000>: { msg: "wait for service account \"default\" in namespace \"reboot-7385\": timed out waiting for the condition", err: <*errors.errorString | 0xc000287c80>{ s: "timed out waiting for the condition", }, } [FAILED] wait for service account "default" in namespace "reboot-7385": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 21:11:41.315 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:11:41.315 (2m0.049s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:11:41.315 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 21:11:41.315 Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-6s4w8 to bootstrap-e2e-minion-group-g05r Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 959.482242ms (959.515491ms including waiting) Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container coredns Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-6s4w8_kube-system(924cbee6-6cd1-4108-a373-011bb84d0d00) Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Readiness probe failed: Get "http://10.64.1.6:8181/ready": dial tcp 10.64.1.6:8181: connect: connection refused Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.367: INFO: event for coredns-6846b5b5f-6s4w8: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-6s4w8 Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Readiness probe failed: Get "http://10.64.1.6:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-fhlmc to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.845509128s (3.845522943s including waiting) Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.7:8181/ready": dial tcp 10.64.2.7:8181: connect: connection refused Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} FailedMount: MountVolume.SetUp failed for volume "config-volume" : object "kube-system"/"coredns" not registered Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container coredns Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-fhlmc Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.17:8181/ready": dial tcp 10.64.2.17:8181: connect: connection refused Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-fhlmc_kube-system(05e23121-6d9c-4eff-9475-84347fef8c9a) Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.20:8181/ready": dial tcp 10.64.2.20:8181: connect: connection refused Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f-fhlmc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-fhlmc Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-fhlmc Jan 28 21:11:41.368: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-6s4w8 Jan 28 21:11:41.368: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 21:11:41.368: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} 
SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 21:11:41.368: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 21:11:41.368: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(f4f6d281abb01fd97fbab9898b841ee8) Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_6915b became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b0b3b became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_956fb became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1c8a7 became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_81052 became leader Jan 28 21:11:41.368: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d293d became leader Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-fx6jw to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.49666404s (1.496680076s including waiting) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-fx6jw_kube-system(904c0f67-24cb-4230-b7fd-e6127549e246) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-fx6jw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-nxmx5 to bootstrap-e2e-minion-group-g05r Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 578.085314ms (578.099087ms including waiting) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-tqnn5 to bootstrap-e2e-minion-group-jq3j Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 617.125424ms (617.132787ms including waiting) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container konnectivity-agent Jan 28 21:11:41.368: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-tqnn5_kube-system(35144846-d770-47bd-9635-2ce65f14a2c4) Jan 28 21:11:41.368: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-fx6jw Jan 28 21:11:41.368: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-tqnn5 Jan 28 21:11:41.368: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-nxmx5 Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 21:11:41.368: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 28 21:11:41.368: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 21:11:41.368: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2c884380-0d8c-4b1f-849d-e60b28ae1c8f became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_75d5164c-e463-455f-9e1a-3bb8a975cbd4 became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ab3c6bc7-5479-4e73-b234-4f40535396e8 became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_853ac3a7-82a3-46e7-997a-15e8b0419ae3 became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ad0d5dff-dfe0-4a81-b527-c10da0dbc2c6 became leader Jan 28 21:11:41.368: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8d4eee79-b6ca-4ae7-b745-24984fc0ea26 became leader Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8gc49 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.293082344s (1.293090978s including waiting) Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container autoscaler Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8gc49_kube-system(62e323eb-96c0-4789-9d04-b84f1884a825) Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:11:41.368: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set 
kube-dns-autoscaler-5f6455f985 to 1 Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0) Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0) Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container kube-proxy 
Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g05r_kube-system(6b09ace535a17263444ad2960f4b8959) Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container kube-proxy Jan 28 21:11:41.368: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-jq3j_kube-system(fcf7764eda52c0ab46d9357b02b9fc41) Jan 28 21:11:41.368: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:11:41.368: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 21:11:41.368: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 21:11:41.368: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4bb17d63-51a5-4714-9ac3-79c98c6cd91e became leader Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: 
bootstrap-e2e-master_93d26758-e2b8-479c-8d94-4cbc6d04d199 became leader Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e17d3031-0f2a-47be-a494-7efe111f6476 became leader Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_173d4f2f-433b-4a60-94fc-9a55200b0100 became leader Jan 28 21:11:41.368: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_5d80d9e0-ade4-460c-a91f-8ca7cbe3fb84 became leader Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-rlkx5 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.450510051s (1.450555981s including waiting) Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container default-http-backend Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container default-http-backend Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container default-http-backend Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-rlkx5 Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container default-http-backend Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-rlkx5 Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Liveness probe failed: Get "http://10.64.2.13:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 21:11:41.368: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-rlkx5 Jan 28 21:11:41.368: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 21:11:41.368: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 21:11:41.368: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 21:11:41.368: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 21:11:41.368: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2dsmd to bootstrap-e2e-minion-group-g05r Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 717.875781ms (717.885972ms including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:11:41.368: 
INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.675684694s (1.675693368s including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2vpw5 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 744.62103ms (744.638761ms including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.836071057s (1.83608501s including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-2vpw5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hpcd7 to bootstrap-e2e-master Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 658.17817ms (658.185211ms including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.237505268s (2.237512666s including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-x44dw to bootstrap-e2e-minion-group-jq3j Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 744.675824ms (744.694272ms including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet 
bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.542318307s (1.54232719s including waiting) Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-x44dw Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2dsmd Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2vpw5 Jan 28 21:11:41.368: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hpcd7 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-82bk2 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 5.632375276s (5.632383215s including waiting) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.113371361s (1.11338839s including waiting) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "https://10.64.2.2:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-82bk2 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-82bk2 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9 to bootstrap-e2e-minion-group-jq3j Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.305614898s (1.305632675s including waiting) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 945.612734ms (945.652382ms including waiting) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Container metrics-server failed liveness probe, will be restarted Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": dial tcp 10.64.0.5:10250: connect: connection refused Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.15:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-gk8n9 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 21:11:41.368: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.757791007s (2.757799348s including waiting) Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e2c01da0-0a7c-4c95-a545-053747d26c71) Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container volume-snapshot-controller Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e2c01da0-0a7c-4c95-a545-053747d26c71) Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 21:11:41.368: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:11:41.368 (54ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:11:41.368 Jan 28 21:11:41.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:11:41.414 (46ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:11:41.414 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:11:41.414 STEP: Collecting events from namespace "reboot-7385". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 21:11:41.414 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/28/23 21:11:41.454 Jan 28 21:11:41.495: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 21:11:41.495: INFO: Jan 28 21:11:41.538: INFO: Logging node info for node bootstrap-e2e-master Jan 28 21:11:41.580: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 5a893541-4edb-4822-b656-8eb749851389 2263 0 2023-01-28 20:52:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 20:52:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 20:52:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 21:08:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 
0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:27 +0000 UTC,LastTransitionTime:2023-01-28 20:52:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.32.116,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f814f882cc154157460b3532a03d8644,SystemUUID:f814f882-cc15-4157-460b-3532a03d8644,BootID:6cb4da42-0e9f-4a20-86db-657430266c2b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:11:41.581: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 21:11:41.627: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 21:11:41.686: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container kube-apiserver ready: true, restart count 1 Jan 28 21:11:41.686: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 20:51:45 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 28 21:11:41.686: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 20:51:45 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 28 21:11:41.686: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 20:51:27 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container kube-scheduler ready: true, restart count 4 Jan 28 21:11:41.686: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container konnectivity-server-container ready: true, restart count 1 Jan 28 21:11:41.686: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container kube-controller-manager ready: false, restart count 6 Jan 28 21:11:41.686: INFO: metadata-proxy-v0.1-hpcd7 started at 2023-01-28 20:52:12 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:41.686: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 21:11:41.686: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 21:11:41.686: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 20:51:27 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container etcd-container ready: true, restart count 2 Jan 28 21:11:41.686: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 20:53:08 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:41.686: INFO: Container etcd-container ready: true, restart count 3 Jan 28 21:11:41.867: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 21:11:41.867: INFO: Logging node info for node bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.909: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-bs1f 08692535-a320-4dcb-91ff-1fa0ba2828d7 2213 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-bs1f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-28 21:07:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 21:07:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-bs1f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 
127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.154.4,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-bs1f.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-bs1f.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2ea9497a2a9005aa8e5e0f3ffad1e133,SystemUUID:2ea9497a-2a90-05aa-8e5e-0f3ffad1e133,BootID:a193f4d3-2147-447c-861e-3b0aa909997e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:11:41.909: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-bs1f Jan 28 21:11:41.954: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-bs1f Jan 28 21:11:42.014: INFO: volume-snapshot-controller-0 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container volume-snapshot-controller ready: false, restart count 8 Jan 28 
21:11:42.014: INFO: metadata-proxy-v0.1-2vpw5 started at 2023-01-28 20:52:09 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:42.014: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 21:11:42.014: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 21:11:42.014: INFO: konnectivity-agent-fx6jw started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 21:11:42.014: INFO: kube-proxy-bootstrap-e2e-minion-group-bs1f started at 2023-01-28 20:52:08 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container kube-proxy ready: true, restart count 8 Jan 28 21:11:42.014: INFO: l7-default-backend-8549d69d99-rlkx5 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container default-http-backend ready: true, restart count 2 Jan 28 21:11:42.014: INFO: coredns-6846b5b5f-fhlmc started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container coredns ready: false, restart count 7 Jan 28 21:11:42.014: INFO: kube-dns-autoscaler-5f6455f985-8gc49 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.014: INFO: Container autoscaler ready: false, restart count 6 Jan 28 21:11:42.202: INFO: Latency metrics for node bootstrap-e2e-minion-group-bs1f Jan 28 21:11:42.202: INFO: Logging node info for node bootstrap-e2e-minion-group-g05r Jan 28 21:11:42.245: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g05r 87185a8f-bb27-450e-89e5-8951dac6f0bd 2650 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g05r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 21:11:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-g05r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 
21:06:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:11:08 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.227.18,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g05r.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g05r.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4bb7737aadc011adf7a719d3300fb8fa,SystemUUID:4bb7737a-adc0-11ad-f7a7-19d3300fb8fa,BootID:64830d10-7653-4a01-b0dd-43c6906fa52f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:11:42.245: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-g05r Jan 28 21:11:42.297: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-g05r Jan 28 21:11:42.366: INFO: konnectivity-agent-nxmx5 started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.366: INFO: Container konnectivity-agent ready: false, restart count 1 Jan 28 21:11:42.366: INFO: coredns-6846b5b5f-6s4w8 started at 2023-01-28 20:52:21 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.366: INFO: Container coredns ready: true, restart count 4 Jan 28 21:11:42.366: INFO: kube-proxy-bootstrap-e2e-minion-group-g05r started at 2023-01-28 20:52:07 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.366: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 21:11:42.366: INFO: metadata-proxy-v0.1-2dsmd started at 2023-01-28 20:52:08 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:42.366: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 21:11:42.366: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 21:11:42.560: INFO: Latency metrics for node bootstrap-e2e-minion-group-g05r Jan 28 21:11:42.560: INFO: Logging node info for node bootstrap-e2e-minion-group-jq3j Jan 28 21:11:42.603: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-jq3j 2b2b9937-135b-4df7-9d57-10f4c3abef5d 2390 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-jq3j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:05:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-28 21:07:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-jq3j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.4.220,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-jq3j.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-jq3j.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de3b20bd84cffdb49aa767a4d3b2d6b6,SystemUUID:de3b20bd-84cf-fdb4-9aa7-67a4d3b2d6b6,BootID:671de56a-7689-4498-b6c5-8a1a18405efe,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:11:42.603: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-jq3j Jan 28 21:11:42.649: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-jq3j Jan 28 21:11:42.729: INFO: kube-proxy-bootstrap-e2e-minion-group-jq3j started at 2023-01-28 20:52:07 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.729: INFO: Container kube-proxy ready: true, restart count 4 Jan 28 21:11:42.729: INFO: metadata-proxy-v0.1-x44dw started at 2023-01-28 20:52:08 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:42.729: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 21:11:42.729: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 21:11:42.729: INFO: konnectivity-agent-tqnn5 started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:11:42.729: INFO: Container konnectivity-agent ready: false, restart count 4 Jan 28 21:11:42.729: INFO: metrics-server-v0.5.2-867b8754b9-gk8n9 started at 2023-01-28 20:52:40 +0000 UTC (0+2 container statuses recorded) Jan 28 21:11:42.729: INFO: Container metrics-server ready: true, restart count 8 Jan 28 21:11:42.729: INFO: Container metrics-server-nanny ready: true, restart count 8 Jan 28 21:11:42.896: 
INFO: Latency metrics for node bootstrap-e2e-minion-group-jq3j END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:11:42.896 (1.482s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:11:42.896 (1.482s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:11:42.896 STEP: Destroying namespace "reboot-7385" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 21:11:42.896 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:11:42.939 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:11:42.939 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:11:42.939 (0s)
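The timeline above shows what the reboot test does per node: it SSHes a backgrounded `nohup sh -c 'sleep 10 && sudo reboot'` to the node's external IP, then waits for the node's Ready condition to go false and come back true while its non-probed pods return to "running and ready, or succeeded". The following is a minimal sketch of that loop using client-go and plain ssh, not the e2e framework itself; the node name, external IP, SSH user and kubeconfig path are copied from the log above purely for illustration.

	// Sketch only: reboot one node the way the log shows, then poll its Ready
	// condition. Assumes SSH access as "prow" to the node's external IP and a
	// kubeconfig at /workspace/.kube/config (both taken from the log above).
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		const nodeName = "bootstrap-e2e-minion-group-jq3j" // from the log above
		const nodeIP = "35.247.4.220"                      // external IP from the log above

		// Same command the test issues over SSH: reboot in the background so the
		// SSH session returns before the node goes down.
		cmd := exec.Command("ssh", "prow@"+nodeIP,
			"nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("ssh failed: %v (%s)\n", err, out)
			return
		}

		ctx := context.Background()
		// Wait for Ready to flip to false (node went down), then back to true.
		for _, want := range []bool{false, true} {
			deadline := time.Now().Add(5 * time.Minute)
			for time.Now().Before(deadline) {
				ready, err := nodeReady(ctx, cs, nodeName)
				if err == nil && ready == want {
					break
				}
				time.Sleep(5 * time.Second)
			}
			fmt.Printf("node %s reached Ready=%v (or timed out)\n", nodeName, want)
		}
	}

In the failing run, the second half of this loop never completed for at least one node within the allotted time, which is the "at least one node failed to reboot in the time given" failure above.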

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:09:39.587
from ginkgo_report.xml
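The timeline below shows the framework's setup phase retrying namespace creation against https://34.105.32.116 for roughly two minutes while the endpoint refuses connections, before the test proper starts. A hedged client-go sketch of that kind of retry, using an illustrative namespace prefix and the kubeconfig path from the log, might look like this (it is not the framework's own implementation):

	// Sketch only: keep creating a test namespace until the apiserver stops
	// refusing connections, mirroring the "connection refused" retries below.
	package main

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ns := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "reboot-"}}
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			created, err := cs.CoreV1().Namespaces().Create(context.Background(), ns, metav1.CreateOptions{})
			if err == nil {
				fmt.Println("created namespace", created.Name)
				return
			}
			// While the apiserver is unreachable this is "connection refused", as in the log.
			fmt.Println("retrying:", err)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for the apiserver")
	}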
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:02:32.913 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:02:32.913 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:02:32.913 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:02:32.913 Jan 28 21:02:32.913: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:02:32.914 Jan 28 21:02:32.953: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:34.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:36.994: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:38.995: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:40.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:42.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:44.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:46.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:48.995: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:50.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:52.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:54.994: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:56.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:58.995: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:03:00.994: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 21:04:38.819 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - 
test/e2e/framework/framework.go:259 @ 01/28/23 21:04:38.899 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:04:38.981 (2m6.068s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:04:38.981 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:04:38.981 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/28/23 21:04:38.981 Jan 28 21:04:39.166: INFO: Getting bootstrap-e2e-minion-group-jq3j Jan 28 21:04:39.166: INFO: Getting bootstrap-e2e-minion-group-bs1f Jan 28 21:04:39.166: INFO: Getting bootstrap-e2e-minion-group-g05r Jan 28 21:04:39.208: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jq3j condition Ready to be true Jan 28 21:04:39.226: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g05r condition Ready to be true Jan 28 21:04:39.226: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bs1f condition Ready to be true Jan 28 21:04:39.250: INFO: Node bootstrap-e2e-minion-group-jq3j has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:04:39.250: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:04:39.250: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-x44dw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.250: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Node bootstrap-e2e-minion-group-bs1f has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8gc49] Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8gc49] Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8gc49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Node bootstrap-e2e-minion-group-g05r has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-g05r metadata-proxy-v0.1-2dsmd] Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-g05r metadata-proxy-v0.1-2dsmd] Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vpw5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2dsmd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod 
"kube-proxy-bootstrap-e2e-minion-group-g05r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.293: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j": Phase="Running", Reason="", readiness=true. Elapsed: 42.866809ms Jan 28 21:04:39.293: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.293: INFO: Pod "metadata-proxy-v0.1-x44dw": Phase="Running", Reason="", readiness=true. Elapsed: 42.979318ms Jan 28 21:04:39.293: INFO: Pod "metadata-proxy-v0.1-x44dw" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.293: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:04:39.293: INFO: Getting external IP address for bootstrap-e2e-minion-group-jq3j Jan 28 21:04:39.293: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-jq3j(35.247.4.220:22) Jan 28 21:04:39.316: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.83529ms Jan 28 21:04:39.316: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:39.316: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 47.279968ms Jan 28 21:04:39.316: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:39.326: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.248504ms Jan 28 21:04:39.326: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:39.326: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r": Phase="Running", Reason="", readiness=true. Elapsed: 56.145454ms Jan 28 21:04:39.326: INFO: Pod "metadata-proxy-v0.1-2vpw5": Phase="Running", Reason="", readiness=true. Elapsed: 56.314802ms Jan 28 21:04:39.326: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.326: INFO: Pod "metadata-proxy-v0.1-2vpw5" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.326: INFO: Pod "metadata-proxy-v0.1-2dsmd": Phase="Running", Reason="", readiness=true. Elapsed: 56.235743ms Jan 28 21:04:39.326: INFO: Pod "metadata-proxy-v0.1-2dsmd" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.326: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-g05r metadata-proxy-v0.1-2dsmd] Jan 28 21:04:39.326: INFO: Getting external IP address for bootstrap-e2e-minion-group-g05r Jan 28 21:04:39.326: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-g05r(34.168.227.18:22) Jan 28 21:04:39.817: INFO: ssh prow@35.247.4.220:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 21:04:39.817: INFO: ssh prow@35.247.4.220:22: stdout: "" Jan 28 21:04:39.817: INFO: ssh prow@35.247.4.220:22: stderr: "" Jan 28 21:04:39.817: INFO: ssh prow@35.247.4.220:22: exit code: 0 Jan 28 21:04:39.817: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-jq3j condition Ready to be false Jan 28 21:04:39.845: INFO: ssh prow@34.168.227.18:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 21:04:39.845: INFO: ssh prow@34.168.227.18:22: stdout: "" Jan 28 21:04:39.845: INFO: ssh prow@34.168.227.18:22: stderr: "" Jan 28 21:04:39.845: INFO: ssh prow@34.168.227.18:22: exit code: 0 Jan 28 21:04:39.845: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g05r condition Ready to be false Jan 28 21:04:39.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:39.887: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:41.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.089445547s Jan 28 21:04:41.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:41.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2.090141645s Jan 28 21:04:41.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:41.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 2.098976603s Jan 28 21:04:41.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:41.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:41.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:43.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.091467339s Jan 28 21:04:43.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:43.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.092214783s Jan 28 21:04:43.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:43.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 4.099794909s Jan 28 21:04:43.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:43.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:43.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:45.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.090473329s Jan 28 21:04:45.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:45.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.091214639s Jan 28 21:04:45.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:45.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 6.098745952s Jan 28 21:04:45.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:45.992: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:46.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:47.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.093046738s Jan 28 21:04:47.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:47.363: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.09387138s Jan 28 21:04:47.363: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:47.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 8.099295564s Jan 28 21:04:47.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:48.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:48.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:49.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.090431674s Jan 28 21:04:49.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:49.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.091437385s Jan 28 21:04:49.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:49.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 10.098833473s Jan 28 21:04:49.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:50.084: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:50.106: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:51.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.091098046s Jan 28 21:04:51.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:51.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.091996832s Jan 28 21:04:51.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:51.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 12.098950705s Jan 28 21:04:51.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:52.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:52.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:53.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 14.08986461s Jan 28 21:04:53.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.089459024s Jan 28 21:04:53.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:53.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:53.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 14.098782589s Jan 28 21:04:53.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:54.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:54.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:55.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 16.089798025s Jan 28 21:04:55.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:55.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.090818982s Jan 28 21:04:55.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:55.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 16.099540563s Jan 28 21:04:55.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:56.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:56.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:57.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 18.090276537s Jan 28 21:04:57.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:57.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.090009448s Jan 28 21:04:57.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:57.372: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 18.103146314s Jan 28 21:04:57.372: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:58.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:58.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:59.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.089459195s Jan 28 21:04:59.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.089883633s Jan 28 21:04:59.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:59.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:59.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 20.099360903s Jan 28 21:04:59.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:00.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:00.323: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:01.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 22.090450186s Jan 28 21:05:01.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.090066616s Jan 28 21:05:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:01.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 22.099367774s Jan 28 21:05:01.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:02.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:02.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:03.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 24.089887562s Jan 28 21:05:03.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:03.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.090739035s Jan 28 21:05:03.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:03.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 24.099416185s Jan 28 21:05:03.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:04.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:04.409: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:05.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 26.090266602s Jan 28 21:05:05.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:05.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.091141323s Jan 28 21:05:05.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:05.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 26.098503319s Jan 28 21:05:05.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:06.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:06.451: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:07.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.092282831s Jan 28 21:05:07.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:07.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.092792387s Jan 28 21:05:07.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:07.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 28.100202904s Jan 28 21:05:07.370: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:08.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:08.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:09.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 30.09062821s Jan 28 21:05:09.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:09.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.091441154s Jan 28 21:05:09.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:09.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 30.099420216s Jan 28 21:05:09.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:10.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:10.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:11.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 32.090333063s Jan 28 21:05:11.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:11.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.091288471s Jan 28 21:05:11.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:11.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 32.098896117s Jan 28 21:05:11.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:12.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:12.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:13.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.089362161s Jan 28 21:05:13.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:13.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.089881214s Jan 28 21:05:13.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:13.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 34.099427127s Jan 28 21:05:13.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:14.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:14.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:15.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 36.089877764s Jan 28 21:05:15.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:15.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.090569192s Jan 28 21:05:15.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:15.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 36.099722934s Jan 28 21:05:15.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:16.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:16.665: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:17.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.090773547s Jan 28 21:05:17.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:17.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.091317626s Jan 28 21:05:17.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:17.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 38.099748863s Jan 28 21:05:17.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:18.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:18.708: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:19.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 40.090309825s Jan 28 21:05:19.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:19.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.091386375s Jan 28 21:05:19.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:19.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 40.098479206s Jan 28 21:05:19.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:20.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:20.753: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:21.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 42.090453708s Jan 28 21:05:21.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:21.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.091367565s Jan 28 21:05:21.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:21.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 42.098967666s Jan 28 21:05:21.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:22.770: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:22.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:23.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 44.09037615s Jan 28 21:05:23.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:23.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.091032037s Jan 28 21:05:23.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:23.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 44.099394502s Jan 28 21:05:23.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:24.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:24.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:25.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 46.090422349s Jan 28 21:05:25.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:25.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.091552937s Jan 28 21:05:25.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:25.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 46.099376833s Jan 28 21:05:25.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:26.857: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:26.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:27.361: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 48.09189296s Jan 28 21:05:27.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:27.362: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.092955474s Jan 28 21:05:27.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:27.372: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 48.103150852s Jan 28 21:05:27.372: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:28.899: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-jq3j condition Ready to be true Jan 28 21:05:28.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:28.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:29.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 50.090023319s Jan 28 21:05:29.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:29.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.091076379s Jan 28 21:05:29.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:29.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 50.099005877s Jan 28 21:05:29.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:30.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:30.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:31.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 52.091417871s Jan 28 21:05:31.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:31.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.092274322s Jan 28 21:05:31.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:31.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 52.098993504s Jan 28 21:05:31.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:33.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:33.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:33.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 54.089963629s Jan 28 21:05:33.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:33.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.090880425s Jan 28 21:05:33.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:33.367: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 54.098191553s Jan 28 21:05:33.367: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:35.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:35.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:35.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 56.089821499s Jan 28 21:05:35.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:35.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.091011264s Jan 28 21:05:35.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:35.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 56.098693614s Jan 28 21:05:35.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:37.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:37.147: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:37.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 58.091150661s Jan 28 21:05:37.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:37.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.092112717s Jan 28 21:05:37.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:37.370: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=true. Elapsed: 58.100271929s Jan 28 21:05:37.370: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" satisfied condition "running and ready, or succeeded" Jan 28 21:05:39.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:39.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:39.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.089891342s Jan 28 21:05:39.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:39.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.09040255s Jan 28 21:05:39.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:41.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:41.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:41.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.090192749s Jan 28 21:05:41.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:41.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.091224351s Jan 28 21:05:41.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:43.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:43.278: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:43.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.090552905s Jan 28 21:05:43.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:43.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.09175854s Jan 28 21:05:43.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:45.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:45.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:45.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.090294952s Jan 28 21:05:45.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.089890119s Jan 28 21:05:45.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:45.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:47.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:47.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.089685023s Jan 28 21:05:47.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:47.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.090563036s Jan 28 21:05:47.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:47.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:49.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.08964607s Jan 28 21:05:49.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:49.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.090042602s Jan 28 21:05:49.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:49.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:05:49.407: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:51.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.089613599s Jan 28 21:05:51.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:51.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.090404168s Jan 28 21:05:51.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:51.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:51.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:53.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.090066034s Jan 28 21:05:53.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:53.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.090899929s Jan 28 21:05:53.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:53.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:53.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:55.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.089696714s Jan 28 21:05:55.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:55.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.09051931s Jan 28 21:05:55.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:55.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:55.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:57.390: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m18.120881101s Jan 28 21:05:57.390: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:57.391: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.12170562s Jan 28 21:05:57.391: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:57.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:57.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:59.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.090123345s Jan 28 21:05:59.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:59.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.090581363s Jan 28 21:05:59.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:59.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:59.621: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:01.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.090445226s Jan 28 21:06:01.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.090034344s Jan 28 21:06:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:01.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:01.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:03.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.090421984s Jan 28 21:06:03.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:03.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.091297831s Jan 28 21:06:03.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:03.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:03.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:05.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.089658901s Jan 28 21:06:05.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:05.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m26.090124621s Jan 28 21:06:05.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:05.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:05.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:07.358: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.089213046s Jan 28 21:06:07.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.08962737s Jan 28 21:06:07.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:07.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:07.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:07.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:09.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m30.09034538s Jan 28 21:06:09.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:09.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.090904084s Jan 28 21:06:09.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:09.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:09.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:11.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.090011953s Jan 28 21:06:11.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:11.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.091223755s Jan 28 21:06:11.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:11.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:11.881: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:13.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.089602048s Jan 28 21:06:13.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:13.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.090654129s Jan 28 21:06:13.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:13.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:13.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:15.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m36.089417793s Jan 28 21:06:15.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m36.089823333s Jan 28 21:06:15.359: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 21:06:15.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:15.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:15.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:17.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.089964177s Jan 28 21:06:17.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:18.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:18.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:19.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.090054693s Jan 28 21:06:19.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:20.068: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:20.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
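For readers skimming the excerpt above: the repeated "Error evaluating pod condition running and ready, or succeeded" and "Condition Ready of node ..." messages come from the test re-checking pod and node status roughly every two seconds until a timeout expires. The sketch below is only an approximation of that behaviour using client-go; the pod name, node name, and kubeconfig path are taken from this log, but the function names, intervals, and structure are assumptions for illustration, not the e2e framework's actual code.

// Illustrative sketch only (assumed names and cadence; not the e2e framework's
// actual implementation): re-check a pod's Ready condition and a node's Ready
// condition every ~2s, the way the interleaved messages above are produced.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady returns the node's Ready condition, if present.
func nodeReady(node *corev1.Node) *corev1.NodeCondition {
	for i, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return &node.Status.Conditions[i]
		}
	}
	return nil
}

func main() {
	// Kubeconfig path as reported near the top of this test output; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const ns, podName = "kube-system", "kube-dns-autoscaler-5f6455f985-8gc49"
	const nodeName = "bootstrap-e2e-minion-group-jq3j"
	start := time.Now()

	// Poll every 2s for up to 5m, roughly the cadence and timeout seen in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil {
			if c := nodeReady(node); c != nil {
				fmt.Printf("Condition Ready of node %s is %s. Reason: %s, message: %s\n",
					nodeName, c.Status, c.Reason, c.Message)
			}
		}
		if pod.Status.Phase == corev1.PodSucceeded ||
			(pod.Status.Phase == corev1.PodRunning && podReady(pod)) {
			return true, nil
		}
		fmt.Printf("Pod %q: Phase=%q, readiness=%v. Elapsed: %v\n",
			podName, pod.Status.Phase, podReady(pod), time.Since(start))
		return false, nil
	})
	if err != nil {
		fmt.Println("pod never became running and ready, or succeeded:", err)
	}
}

In this phase of the reboot test the framework is waiting for each node's Ready condition to flip to false after inbound packets are dropped, which is why "is true instead of false" is logged as an error for bootstrap-e2e-minion-group-g05r and why that node is eventually flagged with "didn't reach desired Ready condition status (false) within 2m0s" further down, while bootstrap-e2e-minion-group-jq3j has already gone NotReady (NodeStatusUnknown, kubelet stopped posting node status).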
Jan 28 21:06:21.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.089550071s Jan 28 21:06:21.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:22.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:22.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:23.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.089538394s Jan 28 21:06:23.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:24.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:24.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:25.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.088923604s Jan 28 21:06:25.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:26.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:26.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:06:27.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.093350406s Jan 28 21:06:27.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:28.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:28.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:29.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.089929354s Jan 28 21:06:29.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:30.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:30.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:31.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.089614038s Jan 28 21:06:31.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:32.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:32.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:33.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.0890961s Jan 28 21:06:33.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:34.378: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:34.378: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:35.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.088988114s Jan 28 21:06:35.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:36.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:36.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:37.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.090838592s Jan 28 21:06:37.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:38.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:38.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:39.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.089243372s Jan 28 21:06:39.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:40.466: INFO: Node bootstrap-e2e-minion-group-g05r didn't reach desired Ready condition status (false) within 2m0s Jan 28 21:06:40.508: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:41.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.089087218s Jan 28 21:06:41.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:42.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:43.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.089280226s Jan 28 21:06:43.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:44.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:45.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m6.088907272s Jan 28 21:06:45.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:46.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:47.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.091282924s Jan 28 21:06:47.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:48.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:49.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.089377694s Jan 28 21:06:49.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:50.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:51.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.089931828s Jan 28 21:06:51.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:52.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:53.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.089912804s Jan 28 21:06:53.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:54.811: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:55.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.088541063s Jan 28 21:06:55.357: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:56.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:57.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m18.093365649s Jan 28 21:06:57.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:58.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:59.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.088652278s Jan 28 21:06:59.357: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:00.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:01.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.088718895s Jan 28 21:07:01.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:02.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:03.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m24.089664516s Jan 28 21:07:03.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:05.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:05.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.089482809s Jan 28 21:07:05.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:07.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:07.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.091273248s Jan 28 21:07:07.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:09.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:09.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m30.089181225s Jan 28 21:07:09.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:11.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:11.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.089575021s Jan 28 21:07:11.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:13.207: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:13.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.089615639s Jan 28 21:07:13.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:15.250: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:15.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m36.089067176s Jan 28 21:07:15.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:17.295: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:17.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.091029085s Jan 28 21:07:17.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:19.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:19.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.088631014s Jan 28 21:07:19.357: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:21.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m42.090232179s Jan 28 21:07:21.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:21.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:23.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.089095586s Jan 28 21:07:23.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:23.423: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:25.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.089083414s Jan 28 21:07:25.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:25.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:27.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m48.091072048s Jan 28 21:07:27.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:27.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:29.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.089925684s Jan 28 21:07:29.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:29.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:31.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m52.089404646s Jan 28 21:07:31.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:31.596: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:33.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m54.088807072s Jan 28 21:07:33.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:33.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:35.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.089258529s Jan 28 21:07:35.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:35.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:37.361: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.092154924s Jan 28 21:07:37.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:37.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:39.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m0.089725293s Jan 28 21:07:39.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:39.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:41.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.088894447s Jan 28 21:07:41.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:41.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:43.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.089021804s Jan 28 21:07:43.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:43.855: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:45.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m6.088751637s Jan 28 21:07:45.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:45.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:47.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m8.089778001s Jan 28 21:07:47.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:47.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:49.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m10.089586503s Jan 28 21:07:49.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:50.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:51.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m12.088695125s Jan 28 21:07:51.357: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:52.077: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:53.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.08929565s Jan 28 21:07:53.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:54.120: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:55.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.089279155s Jan 28 21:07:55.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:56.163: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:57.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m18.090738864s Jan 28 21:07:57.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:58.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:59.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.089507207s Jan 28 21:07:59.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:00.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:01.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.089735021s Jan 28 21:08:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:02.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:03.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m24.088883218s Jan 28 21:08:03.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:04.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:05.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.088897325s Jan 28 21:08:05.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:06.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:07.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.089663138s Jan 28 21:08:07.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:08.421: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:09.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m30.089378573s Jan 28 21:08:09.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:10.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:11.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m32.089021658s Jan 28 21:08:11.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:12.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:13.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.08914038s Jan 28 21:08:13.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:14.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:15.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m36.08885245s Jan 28 21:08:15.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:16.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:17.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.090598152s Jan 28 21:08:17.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:18.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:19.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m40.089346438s Jan 28 21:08:19.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:20.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:21.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m42.090291378s Jan 28 21:08:21.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:22.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:23.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m44.089137322s Jan 28 21:08:23.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:24.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:25.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.090046494s Jan 28 21:08:25.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:26.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:27.361: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m48.091936679s Jan 28 21:08:27.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:28.871: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:29.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.089312263s Jan 28 21:08:29.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:30.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:31.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m52.089603274s Jan 28 21:08:31.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:32.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:33.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m54.088974691s Jan 28 21:08:33.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:34.996: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:35.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m56.089916523s Jan 28 21:08:35.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:37.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:37.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.0908882s Jan 28 21:08:37.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:39.081: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:39.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m0.093615757s Jan 28 21:08:39.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:41.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:41.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.089850305s Jan 28 21:08:41.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:43.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:43.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m4.089081654s Jan 28 21:08:43.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:45.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:45.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m6.089567362s Jan 28 21:08:45.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:47.253: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:47.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m8.091707472s Jan 28 21:08:47.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:49.295: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:49.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.089701932s Jan 28 21:08:49.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:51.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 21:08:49 +0000 UTC}]. Failure Jan 28 21:08:51.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m12.089464632s Jan 28 21:08:51.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:53.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.088890768s Jan 28 21:08:53.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:53.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 21:08:49 +0000 UTC}]. Failure Jan 28 21:08:55.361: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m16.092214151s Jan 28 21:08:55.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:55.424: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:08:55.424: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-x44dw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:08:55.424: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:08:55.468: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j": Phase="Running", Reason="", readiness=true. Elapsed: 43.593989ms Jan 28 21:08:55.468: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" satisfied condition "running and ready, or succeeded" Jan 28 21:08:55.468: INFO: Pod "metadata-proxy-v0.1-x44dw": Phase="Running", Reason="", readiness=true. 
Elapsed: 43.681522ms Jan 28 21:08:55.468: INFO: Pod "metadata-proxy-v0.1-x44dw" satisfied condition "running and ready, or succeeded" Jan 28 21:08:55.468: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:08:55.468: INFO: Reboot successful on node bootstrap-e2e-minion-group-jq3j Jan 28 21:08:57.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.090807235s Jan 28 21:08:57.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:59.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m20.089573435s Jan 28 21:08:59.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:01.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.089245703s Jan 28 21:09:01.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:03.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m24.089028307s Jan 28 21:09:03.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:05.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.089238258s Jan 28 21:09:05.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:07.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m28.089378948s Jan 28 21:09:07.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:09.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.089598773s Jan 28 21:09:09.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:11.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m32.089365797s Jan 28 21:09:11.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:13.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.088980637s Jan 28 21:09:13.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:15.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m36.08923902s Jan 28 21:09:15.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:17.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.089331198s Jan 28 21:09:17.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:19.377: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m40.108714428s Jan 28 21:09:19.378: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:21.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.090205158s Jan 28 21:09:21.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:23.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m44.089401907s Jan 28 21:09:23.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:25.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.089097875s Jan 28 21:09:25.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:27.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m48.090902855s Jan 28 21:09:27.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:29.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.089355277s Jan 28 21:09:29.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:31.364: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m52.094922311s Jan 28 21:09:31.364: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:33.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.088835062s Jan 28 21:09:33.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:35.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m56.08961707s Jan 28 21:09:35.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:37.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.093358738s Jan 28 21:09:37.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m6.069s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m0.001s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 5194 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc004c74600?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f03c2f33920?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f03c2f33920?, 0xc004c2e2c0}, {0x8147108?, 0xc0057f1040}, {0x7813648, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f03c2f33920?, 0xc004c2e2c0?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc004c2e2c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 5196 [chan receive, 5 minutes] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7f03c2f33920?, 0xc004c2e2c0}, {0x8147108?, 0xc0057f1040}, {0x76d190b, 0xb}, {0xc00590c2c0, 0x4, 0x4}, 0x45d964b800, ...) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...) 
test/e2e/framework/pod/resource.go:508 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f03c2f33920, 0xc004c2e2c0}, {0x8147108, 0xc0057f1040}, {0x7ffce0f79600, 0x3}, {0xc0047d85e0, 0x1f}, {0x7813648, 0x37}) test/e2e/cloud/gcp/reboot.go:284 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 21:09:39.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.089723831s Jan 28 21:09:39.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:39.400: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.131057962s Jan 28 21:09:39.400: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:39.400: INFO: Pod kube-dns-autoscaler-5f6455f985-8gc49 failed to be running and ready, or succeeded. Jan 28 21:09:39.400: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8gc49] Jan 28 21:09:39.400: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:04:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:04:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.138.0.5 PodIPs:[{IP:10.138.0.5}] StartTime:2023-01-28 20:52:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 1m20s restarting failed container=kube-proxy pod=kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 21:02:43 +0000 UTC,FinishedAt:2023-01-28 21:04:10 +0000 UTC,ContainerID:containerd://b94e3b7ac3f96307dc51874fceeb109f2bf903790514ecbf152eea487c9c88e4,}} Ready:false RestartCount:6 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426 ImageID:sha256:7cfe96b1b0a6dab2250fd7fe9d39abd4ae7fc2b1797108dee1d98e2415ede8aa ContainerID:containerd://b94e3b7ac3f96307dc51874fceeb109f2bf903790514ecbf152eea487c9c88e4 Started:0xc005880b9f}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 21:09:39.458: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f/kube-proxy: Jan 28 21:09:39.458: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-bs1f/kube-proxy: Jan 28 21:09:39.458: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:03:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:03:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.64.2.31 PodIPs:[{IP:10.64.2.31}] StartTime:2023-01-28 20:52:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=volume-snapshot-controller 
pod=volume-snapshot-controller-0_kube-system(e2c01da0-0a7c-4c95-a545-053747d26c71),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 21:02:09 +0000 UTC,FinishedAt:2023-01-28 21:03:29 +0000 UTC,ContainerID:containerd://ab623b950c5c773ffe66ebacbaea798d6883df61043fa116d0368f73d5c76e33,}} Ready:false RestartCount:7 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://ab623b950c5c773ffe66ebacbaea798d6883df61043fa116d0368f73d5c76e33 Started:0xc0058812ff}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 28 21:09:39.541: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0128 21:06:13.377622 1 main.go:125] Version: v6.1.0 I0128 21:06:13.378435 1 main.go:168] Metrics path successfully registered at /metrics I0128 21:06:13.378534 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] I0128 21:06:13.391143 1 main.go:224] Metrics http server successfully started on :9102, /metrics I0128 21:06:13.391422 1 reflector.go:221] Starting reflector *v1.VolumeSnapshot (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.391441 1 reflector.go:257] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.391770 1 reflector.go:221] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134 I0128 21:06:13.391786 1 reflector.go:257] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134 I0128 21:06:13.392150 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.392165 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.392459 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.392497 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.392742 1 snapshot_controller_base.go:152] Starting snapshot controller I0128 21:06:13.493752 1 shared_informer.go:285] caches populated I0128 21:06:13.493790 1 snapshot_controller_base.go:509] controller initialized Jan 28 21:09:39.541: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0128 21:06:13.377622 1 main.go:125] Version: v6.1.0 I0128 21:06:13.378435 1 main.go:168] Metrics path successfully registered at /metrics I0128 21:06:13.378534 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] I0128 21:06:13.391143 1 main.go:224] Metrics http server successfully started on :9102, /metrics I0128 21:06:13.391422 1 reflector.go:221] Starting reflector *v1.VolumeSnapshot (15m0s) from 
github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.391441 1 reflector.go:257] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.391770 1 reflector.go:221] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134 I0128 21:06:13.391786 1 reflector.go:257] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134 I0128 21:06:13.392150 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.392165 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.392459 1 reflector.go:221] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.392497 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117 I0128 21:06:13.392742 1 snapshot_controller_base.go:152] Starting snapshot controller I0128 21:06:13.493752 1 shared_informer.go:285] caches populated I0128 21:06:13.493790 1 snapshot_controller_base.go:509] controller initialized Jan 28 21:09:39.541: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-8gc49: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:03:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 21:03:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 20:52:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.64.2.32 PodIPs:[{IP:10.64.2.32}] StartTime:2023-01-28 20:52:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 1m20s restarting failed container=autoscaler pod=kube-dns-autoscaler-5f6455f985-8gc49_kube-system(62e323eb-96c0-4789-9d04-b84f1884a825),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 21:02:16 +0000 UTC,FinishedAt:2023-01-28 21:03:48 +0000 UTC,ContainerID:containerd://3ce8e3fd3ea8f40cfa81360d9e2f7ef99fef99b28079fdc56c05784bbbb0ae17,}} Ready:false RestartCount:4 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://3ce8e3fd3ea8f40cfa81360d9e2f7ef99fef99b28079fdc56c05784bbbb0ae17 Started:0xc0058808df}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 28 21:09:39.586: INFO: 
Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-8gc49/autoscaler: I0128 21:07:58.715717 1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns I0128 21:07:58.970158 1 plugin.go:50] Set control mode to linear I0128 21:07:58.970216 1 linear_controller.go:60] ConfigMap version change (old: new: 559) - rebuilding params I0128 21:07:58.970227 1 linear_controller.go:61] Params from apiserver: {"coresPerReplica":256,"includeUnschedulableNodes":true,"nodesPerReplica":16,"preventSinglePointFailure":true} I0128 21:07:58.970395 1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller Jan 28 21:09:39.586: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-8gc49/autoscaler: I0128 21:07:58.715717 1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns I0128 21:07:58.970158 1 plugin.go:50] Set control mode to linear I0128 21:07:58.970216 1 linear_controller.go:60] ConfigMap version change (old: new: 559) - rebuilding params I0128 21:07:58.970227 1 linear_controller.go:61] Params from apiserver: {"coresPerReplica":256,"includeUnschedulableNodes":true,"nodesPerReplica":16,"preventSinglePointFailure":true} I0128 21:07:58.970395 1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller Jan 28 21:09:39.586: INFO: Node bootstrap-e2e-minion-group-bs1f failed reboot test. Jan 28 21:09:39.586: INFO: Node bootstrap-e2e-minion-group-g05r failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:09:39.587 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/28/23 21:09:39.587 (5m0.606s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:09:39.587 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 21:09:39.587 Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-6s4w8 to bootstrap-e2e-minion-group-g05r Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 959.482242ms (959.515491ms including waiting) Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
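The poll entries above all evaluate the same predicate, "running and ready, or succeeded": the pod must either have reached phase Succeeded or be Running with its Ready condition True. Below is a minimal standalone client-go sketch of that check against the autoscaler pod named in this log; it is illustrative only, not the e2e framework's CheckPodsRunningReadyOrSucceeded helper, and the kubeconfig path is an assumption.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podRunningReadyOrSucceeded mirrors the condition the log keeps re-evaluating:
    // phase Succeeded, or phase Running with the Ready condition set to True.
    func podRunningReadyOrSucceeded(pod *corev1.Pod) bool {
        if pod.Status.Phase == corev1.PodSucceeded {
            return true
        }
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path assumed for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "kube-dns-autoscaler-5f6455f985-8gc49", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("running and ready, or succeeded:", podRunningReadyOrSucceeded(pod))
    }

In this run the predicate stays false for the full 5m0s budget because the autoscaler container keeps restarting, so the pod's Ready condition never transitions back to True before the timeout.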
Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-6s4w8_kube-system(924cbee6-6cd1-4108-a373-011bb84d0d00) Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Readiness probe failed: Get "http://10.64.1.6:8181/ready": dial tcp 10.64.1.6:8181: connect: connection refused Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-6s4w8 Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Unhealthy: Readiness probe failed: Get "http://10.64.1.6:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-6s4w8: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-fhlmc to bootstrap-e2e-minion-group-bs1f Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.845509128s (3.845522943s including waiting) Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.7:8181/ready": dial tcp 10.64.2.7:8181: connect: connection refused Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} FailedMount: MountVolume.SetUp failed for volume "config-volume" : object "kube-system"/"coredns" not registered Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container coredns Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-fhlmc Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.17:8181/ready": dial tcp 10.64.2.17:8181: connect: connection refused Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-fhlmc_kube-system(05e23121-6d9c-4eff-9475-84347fef8c9a) Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "http://10.64.2.20:8181/ready": dial tcp 10.64.2.20:8181: connect: connection refused Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f-fhlmc: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-fhlmc Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-fhlmc Jan 28 21:09:39.638: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-6s4w8 Jan 28 21:09:39.639: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 21:09:39.639: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 21:09:39.639: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 21:09:39.639: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 21:09:39.639: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 21:09:39.639: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 21:09:39.639: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} 
SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 21:09:39.639: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 21:09:39.639: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 21:09:39.639: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 21:09:39.639: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 21:09:39.639: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 21:09:39.639: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(f4f6d281abb01fd97fbab9898b841ee8) Jan 28 21:09:39.639: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_6915b became leader Jan 28 21:09:39.639: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b0b3b became leader Jan 28 21:09:39.639: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_956fb became leader Jan 28 21:09:39.639: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1c8a7 became leader Jan 28 21:09:39.639: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_81052 became leader Jan 28 21:09:39.639: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d293d became leader Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-fx6jw to bootstrap-e2e-minion-group-bs1f Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.49666404s (1.496680076s including waiting) Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-fx6jw_kube-system(904c0f67-24cb-4230-b7fd-e6127549e246) Jan 28 21:09:39.639: INFO: event for konnectivity-agent-fx6jw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-nxmx5 to bootstrap-e2e-minion-group-g05r Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 578.085314ms (578.099087ms including waiting) Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for konnectivity-agent-nxmx5: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-tqnn5 to bootstrap-e2e-minion-group-jq3j Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 617.125424ms (617.132787ms including waiting) Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent-tqnn5: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container konnectivity-agent Jan 28 21:09:39.639: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-fx6jw Jan 28 21:09:39.639: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-tqnn5 Jan 28 21:09:39.639: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-nxmx5 Jan 28 21:09:39.639: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 21:09:39.639: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 21:09:39.639: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 21:09:39.639: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused Jan 28 21:09:39.639: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 21:09:39.639: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 21:09:39.639: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 21:09:39.639: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 21:09:39.639: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.639: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 21:09:39.639: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 28 21:09:39.639: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 21:09:39.639: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 21:09:39.639: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 21:09:39.639: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 21:09:39.639: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 21:09:39.639: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(f70ce176158303a9ebd031d7e3b6127a) Jan 28 21:09:39.639: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2c884380-0d8c-4b1f-849d-e60b28ae1c8f became leader Jan 28 21:09:39.639: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_75d5164c-e463-455f-9e1a-3bb8a975cbd4 became leader Jan 28 21:09:39.639: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ab3c6bc7-5479-4e73-b234-4f40535396e8 became leader Jan 28 21:09:39.639: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_853ac3a7-82a3-46e7-997a-15e8b0419ae3 became leader Jan 28 21:09:39.639: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ad0d5dff-dfe0-4a81-b527-c10da0dbc2c6 became leader Jan 28 21:09:39.639: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8d4eee79-b6ca-4ae7-b745-24984fc0ea26 became leader Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8gc49 to bootstrap-e2e-minion-group-bs1f Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.293082344s (1.293090978s including waiting) Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container autoscaler Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container autoscaler Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container autoscaler Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container autoscaler Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container autoscaler Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8gc49_kube-system(62e323eb-96c0-4789-9d04-b84f1884a825) Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985-8gc49: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8gc49 Jan 28 21:09:39.639: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set 
kube-dns-autoscaler-5f6455f985 to 1 Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0) Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-bs1f_kube-system(22272a191c0d024a253f7f4807e9b7a0) Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-bs1f: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container kube-proxy 
Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Killing: Stopping container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-g05r_kube-system(6b09ace535a17263444ad2960f4b8959) Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-g05r: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-proxy-bootstrap-e2e-minion-group-jq3j: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container kube-proxy Jan 28 21:09:39.639: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426" already present on machine Jan 28 21:09:39.639: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 21:09:39.639: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 21:09:39.639: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(51babbd1f81b742b53c210ccd4aba348) Jan 28 21:09:39.639: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4bb17d63-51a5-4714-9ac3-79c98c6cd91e became leader Jan 28 21:09:39.639: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_93d26758-e2b8-479c-8d94-4cbc6d04d199 became leader Jan 28 21:09:39.639: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e17d3031-0f2a-47be-a494-7efe111f6476 became leader Jan 28 21:09:39.639: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_173d4f2f-433b-4a60-94fc-9a55200b0100 became leader Jan 28 21:09:39.639: INFO: event for kube-scheduler: {default-scheduler } 
LeaderElection: bootstrap-e2e-master_5d80d9e0-ade4-460c-a91f-8ca7cbe3fb84 became leader Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-rlkx5 to bootstrap-e2e-minion-group-bs1f Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.450510051s (1.450555981s including waiting) Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container default-http-backend Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container default-http-backend Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container default-http-backend Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-rlkx5 Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container default-http-backend Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-rlkx5 Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Liveness probe failed: Get "http://10.64.2.13:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99-rlkx5: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 21:09:39.639: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-rlkx5 Jan 28 21:09:39.639: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 21:09:39.639: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 21:09:39.639: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 21:09:39.639: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 21:09:39.639: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2dsmd to bootstrap-e2e-minion-group-g05r Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 717.875781ms (717.885972ms including waiting) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:09:39.639: 
INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.675684694s (1.675693368s including waiting) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2dsmd: {kubelet bootstrap-e2e-minion-group-g05r} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2vpw5 to bootstrap-e2e-minion-group-bs1f Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 744.62103ms (744.638761ms including waiting) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.836071057s (1.83608501s including waiting) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {kubelet bootstrap-e2e-minion-group-bs1f} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-2vpw5: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hpcd7 to bootstrap-e2e-master Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 658.17817ms (658.185211ms including waiting) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.237505268s (2.237512666s including waiting) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-hpcd7: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-x44dw to bootstrap-e2e-minion-group-jq3j Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 744.675824ms (744.694272ms including waiting) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet 
bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.542318307s (1.54232719s including waiting) Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {node-controller } NodeNotReady: Node is not ready Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metadata-proxy Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1-x44dw: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container prometheus-to-sd-exporter Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-x44dw Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2dsmd Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2vpw5 Jan 28 21:09:39.639: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hpcd7 Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-82bk2 to bootstrap-e2e-minion-group-bs1f
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 5.632375276s (5.632383215s including waiting)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.113371361s (1.11338839s including waiting)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Unhealthy: Readiness probe failed: Get "https://10.64.2.2:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c-82bk2: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-82bk2
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-82bk2
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9 to bootstrap-e2e-minion-group-jq3j
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.305614898s (1.305632675s including waiting)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 945.612734ms (945.652382ms including waiting)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": dial tcp 10.64.0.5:10250: connect: connection refused
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Killing: Stopping container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-gk8n9
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Created: Created container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Started: Started container metrics-server-nanny
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9-gk8n9: {kubelet bootstrap-e2e-minion-group-jq3j} Unhealthy: Readiness probe failed: Get "https://10.64.0.15:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-gk8n9
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 28 21:09:39.639: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-bs1f
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.757791007s (2.757799348s including waiting)
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container volume-snapshot-controller
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container volume-snapshot-controller
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container volume-snapshot-controller
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e2c01da0-0a7c-4c95-a545-053747d26c71)
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} NetworkNotReady: network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Created: Created container volume-snapshot-controller
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Started: Started container volume-snapshot-controller
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} Killing: Stopping container volume-snapshot-controller
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-bs1f} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(e2c01da0-0a7c-4c95-a545-053747d26c71)
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 28 21:09:39.639: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 21:09:39.639 (52ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:09:39.639
Jan 28 21:09:39.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 21:09:39.684 (45ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:09:39.684
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 21:09:39.684 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:09:39.684
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:09:39.685
STEP: Collecting events from namespace "reboot-9470". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 21:09:39.685
STEP: Found 0 events.
- test/e2e/framework/debug/dump.go:46 @ 01/28/23 21:09:39.725 Jan 28 21:09:39.766: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 21:09:39.766: INFO: Jan 28 21:09:39.811: INFO: Logging node info for node bootstrap-e2e-master Jan 28 21:09:39.852: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 5a893541-4edb-4822-b656-8eb749851389 2263 0 2023-01-28 20:52:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 20:52:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 20:52:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 21:08:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 
0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:27 +0000 UTC,LastTransitionTime:2023-01-28 20:52:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:11 +0000 UTC,LastTransitionTime:2023-01-28 20:52:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.32.116,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f814f882cc154157460b3532a03d8644,SystemUUID:f814f882-cc15-4157-460b-3532a03d8644,BootID:6cb4da42-0e9f-4a20-86db-657430266c2b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:57552182,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:09:39.853: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 21:09:39.903: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 21:09:39.964: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 20:51:45 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:39.964: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 28 21:09:39.964: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 20:51:27 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:39.964: INFO: Container kube-scheduler ready: true, restart count 4 Jan 28 21:09:39.964: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:39.964: INFO: Container konnectivity-server-container ready: true, restart count 1 Jan 28 21:09:39.964: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:39.964: INFO: Container kube-apiserver ready: true, restart count 1 Jan 28 21:09:39.964: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 20:51:45 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:39.964: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 28 21:09:39.964: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 20:51:27 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:39.964: INFO: Container etcd-container ready: true, restart count 2 Jan 28 21:09:39.964: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 20:53:08 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:39.964: INFO: Container etcd-container ready: true, restart count 3 Jan 28 21:09:39.964: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 20:51:28 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:39.964: INFO: Container kube-controller-manager ready: false, restart count 6 Jan 28 21:09:39.964: INFO: metadata-proxy-v0.1-hpcd7 started at 2023-01-28 20:52:12 +0000 UTC (0+2 container statuses recorded) Jan 28 21:09:39.964: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 21:09:39.964: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 21:09:40.141: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 21:09:40.142: INFO: Logging node info for node bootstrap-e2e-minion-group-bs1f Jan 28 21:09:40.184: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-bs1f 08692535-a320-4dcb-91ff-1fa0ba2828d7 2213 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-bs1f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-28 21:07:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-28 21:07:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-bs1f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 
127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:26 +0000 UTC,LastTransitionTime:2023-01-28 20:55:23 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:07:31 +0000 UTC,LastTransitionTime:2023-01-28 21:02:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.154.4,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-bs1f.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-bs1f.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2ea9497a2a9005aa8e5e0f3ffad1e133,SystemUUID:2ea9497a-2a90-05aa-8e5e-0f3ffad1e133,BootID:a193f4d3-2147-447c-861e-3b0aa909997e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:09:40.184: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-bs1f Jan 28 21:09:40.229: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-bs1f Jan 28 21:09:40.278: INFO: coredns-6846b5b5f-fhlmc started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.278: INFO: Container coredns ready: true, restart count 7 Jan 28 21:09:40.278: INFO: 
kube-dns-autoscaler-5f6455f985-8gc49 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.278: INFO: Container autoscaler ready: false, restart count 6 Jan 28 21:09:40.278: INFO: volume-snapshot-controller-0 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.278: INFO: Container volume-snapshot-controller ready: false, restart count 8 Jan 28 21:09:40.278: INFO: metadata-proxy-v0.1-2vpw5 started at 2023-01-28 20:52:09 +0000 UTC (0+2 container statuses recorded) Jan 28 21:09:40.278: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 21:09:40.278: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 21:09:40.278: INFO: konnectivity-agent-fx6jw started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.278: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 21:09:40.278: INFO: kube-proxy-bootstrap-e2e-minion-group-bs1f started at 2023-01-28 20:52:08 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.278: INFO: Container kube-proxy ready: false, restart count 7 Jan 28 21:09:40.278: INFO: l7-default-backend-8549d69d99-rlkx5 started at 2023-01-28 20:52:16 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.278: INFO: Container default-http-backend ready: true, restart count 2 Jan 28 21:09:40.440: INFO: Latency metrics for node bootstrap-e2e-minion-group-bs1f Jan 28 21:09:40.440: INFO: Logging node info for node bootstrap-e2e-minion-group-g05r Jan 28 21:09:40.482: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g05r 87185a8f-bb27-450e-89e5-8951dac6f0bd 2354 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g05r kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-28 21:06:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-g05r,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:06:07 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:06:07 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:06:07 +0000 UTC,LastTransitionTime:2023-01-28 
21:06:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:06:07 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:06:07 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:06:07 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:06:07 +0000 UTC,LastTransitionTime:2023-01-28 21:06:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:02:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.227.18,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g05r.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g05r.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4bb7737aadc011adf7a719d3300fb8fa,SystemUUID:4bb7737a-adc0-11ad-f7a7-19d3300fb8fa,BootID:64830d10-7653-4a01-b0dd-43c6906fa52f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:09:40.483: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-g05r Jan 28 21:09:40.527: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-g05r Jan 28 21:09:40.592: INFO: konnectivity-agent-nxmx5 started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.592: INFO: Container konnectivity-agent ready: false, restart count 1 Jan 28 21:09:40.592: INFO: coredns-6846b5b5f-6s4w8 started at 2023-01-28 20:52:21 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.592: INFO: Container coredns ready: true, restart count 4 Jan 28 21:09:40.592: INFO: kube-proxy-bootstrap-e2e-minion-group-g05r started at 2023-01-28 20:52:07 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.592: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 21:09:40.592: INFO: metadata-proxy-v0.1-2dsmd started at 2023-01-28 20:52:08 +0000 UTC (0+2 container statuses recorded) Jan 28 21:09:40.592: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 21:09:40.592: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 21:09:40.757: INFO: Latency metrics for node bootstrap-e2e-minion-group-g05r Jan 28 21:09:40.757: INFO: Logging node info for node bootstrap-e2e-minion-group-jq3j Jan 28 21:09:40.812: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-jq3j 2b2b9937-135b-4df7-9d57-10f4c3abef5d 2390 0 2023-01-28 20:52:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-jq3j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 20:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 21:05:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-28 21:07:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 21:08:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce/us-west1-b/bootstrap-e2e-minion-group-jq3j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 21:07:33 +0000 UTC,LastTransitionTime:2023-01-28 21:07:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 20:52:16 +0000 UTC,LastTransitionTime:2023-01-28 20:52:16 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 21:08:50 +0000 UTC,LastTransitionTime:2023-01-28 21:08:50 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.4.220,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-jq3j.c.k8s-jkns-e2e-gce.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-jq3j.c.k8s-jkns-e2e-gce.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de3b20bd84cffdb49aa767a4d3b2d6b6,SystemUUID:de3b20bd-84cf-fdb4-9aa7-67a4d3b2d6b6,BootID:671de56a-7689-4498-b6c5-8a1a18405efe,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.71+86455ae12e0426,KubeProxyVersion:v1.27.0-alpha.1.71+86455ae12e0426,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.71_86455ae12e0426],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 21:09:40.812: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-jq3j Jan 28 21:09:40.857: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-jq3j Jan 28 21:09:40.924: INFO: konnectivity-agent-tqnn5 started at 2023-01-28 20:52:17 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.924: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 28 21:09:40.924: INFO: metrics-server-v0.5.2-867b8754b9-gk8n9 started at 2023-01-28 20:52:40 +0000 UTC (0+2 container statuses recorded) Jan 28 21:09:40.924: INFO: Container metrics-server ready: true, restart count 8 Jan 28 21:09:40.924: INFO: Container metrics-server-nanny ready: true, restart count 8 Jan 28 21:09:40.924: INFO: kube-proxy-bootstrap-e2e-minion-group-jq3j started at 2023-01-28 20:52:07 +0000 UTC (0+1 container statuses recorded) Jan 28 21:09:40.924: INFO: Container kube-proxy ready: true, restart count 3 Jan 28 21:09:40.924: INFO: metadata-proxy-v0.1-x44dw started at 2023-01-28 20:52:08 +0000 UTC (0+2 container statuses recorded) Jan 28 21:09:40.924: INFO: Container metadata-proxy ready: true, restart count 2 Jan 28 21:09:40.924: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 28 21:09:41.093: 
INFO: Latency metrics for node bootstrap-e2e-minion-group-jq3j END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 21:09:41.093 (1.409s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 21:09:41.093 (1.409s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:09:41.093 STEP: Destroying namespace "reboot-9470" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 21:09:41.093 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 21:09:41.137 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:09:41.138 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 21:09:41.138 (0s)
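The dump above (node conditions, kubelet events, and the pods reported on bootstrap-e2e-minion-group-jq3j) is the framework's standard after-failure diagnostics. A minimal client-go sketch that reproduces the same information against a live cluster — this is not the framework's own dump helper; the kubeconfig path and node name are simply taken from the log above:

// Print a node's conditions and the pods scheduled on it, similar to the
// information the e2e framework dumps after a failure. Sketch only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	nodeName := "bootstrap-e2e-minion-group-jq3j"

	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s: %s)\n", c.Type, c.Status, c.Reason, c.Message)
	}

	// Pods assigned to the node, across all namespaces.
	pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}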
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
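This test orders a clean reboot of every node over SSH; further down in the timeline it shows up as SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" against each node's external IP on port 22 as user prow. A rough standalone sketch of that one step — not the framework's SSH helper; the private-key path is an assumption, while the host, user, and command come from the log:

// Fire-and-forget a delayed reboot over SSH, mirroring the command the
// reboot test runs on each node. Sketch only.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed key location for a GCE test VM; adjust to your environment.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/google_compute_engine")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "prow",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "35.247.4.220:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// nohup plus the trailing '&' lets the SSH session return immediately
	// while the node reboots roughly 10 seconds later.
	if err := sess.Run("nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &"); err != nil {
		panic(err)
	}
	fmt.Println("reboot scheduled")
}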
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 21:09:39.587
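The failure at reboot.go:190 means at least one node did not complete the Ready -> NotReady -> Ready cycle in the allotted time; in the timeline below the framework repeatedly logs "Condition Ready of node ... is true instead of false" because the nodes never drop out of Ready after the reboot command is issued. A minimal sketch of that polling pattern with client-go — the 2m window and node name are taken from the log, the 5m recovery window is an illustrative assumption, and this is not the framework's own WaitConditionToBe helper:

// Poll a node's Ready condition until it reaches the desired value, the same
// pattern the reboot test relies on (Ready must first go false, then true).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForReady(cs *kubernetes.Clientset, node string, want corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), node, metav1.GetOptions{})
		if err != nil {
			// Transient API errors (e.g. while the control plane restarts) count as "not yet".
			return false, nil
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == want, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node := "bootstrap-e2e-minion-group-jq3j"
	// Ready should drop within 2m of the reboot command, as in the timeline below.
	if err := waitForReady(cs, node, corev1.ConditionFalse, 2*time.Minute); err != nil {
		panic(fmt.Errorf("node never became NotReady: %w", err))
	}
	// Assumed 5m here for the node to report Ready again after boot.
	if err := waitForReady(cs, node, corev1.ConditionTrue, 5*time.Minute); err != nil {
		panic(fmt.Errorf("node never returned to Ready: %w", err))
	}
	fmt.Println("reboot cycle observed")
}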
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:02:32.913 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 21:02:32.913 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:02:32.913 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 21:02:32.913 Jan 28 21:02:32.913: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 21:02:32.914 Jan 28 21:02:32.953: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:34.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:36.994: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:38.995: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:40.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:42.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:44.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:46.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:48.995: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:50.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:52.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:54.994: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:56.993: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:02:58.995: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused Jan 28 21:03:00.994: INFO: Unexpected error while creating namespace: Post "https://34.105.32.116/api/v1/namespaces": dial tcp 34.105.32.116:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 21:04:38.819 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - 
test/e2e/framework/framework.go:259 @ 01/28/23 21:04:38.899 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 21:04:38.981 (2m6.068s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:04:38.981 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 21:04:38.981 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/28/23 21:04:38.981 Jan 28 21:04:39.166: INFO: Getting bootstrap-e2e-minion-group-jq3j Jan 28 21:04:39.166: INFO: Getting bootstrap-e2e-minion-group-bs1f Jan 28 21:04:39.166: INFO: Getting bootstrap-e2e-minion-group-g05r Jan 28 21:04:39.208: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-jq3j condition Ready to be true Jan 28 21:04:39.226: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-g05r condition Ready to be true Jan 28 21:04:39.226: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-bs1f condition Ready to be true Jan 28 21:04:39.250: INFO: Node bootstrap-e2e-minion-group-jq3j has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:04:39.250: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:04:39.250: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-x44dw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.250: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Node bootstrap-e2e-minion-group-bs1f has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8gc49] Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-bs1f metadata-proxy-v0.1-2vpw5 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8gc49] Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8gc49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Node bootstrap-e2e-minion-group-g05r has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-g05r metadata-proxy-v0.1-2dsmd] Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-g05r metadata-proxy-v0.1-2dsmd] Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vpw5" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2dsmd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.269: INFO: Waiting up to 5m0s for pod 
"kube-proxy-bootstrap-e2e-minion-group-g05r" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:04:39.293: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j": Phase="Running", Reason="", readiness=true. Elapsed: 42.866809ms Jan 28 21:04:39.293: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.293: INFO: Pod "metadata-proxy-v0.1-x44dw": Phase="Running", Reason="", readiness=true. Elapsed: 42.979318ms Jan 28 21:04:39.293: INFO: Pod "metadata-proxy-v0.1-x44dw" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.293: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:04:39.293: INFO: Getting external IP address for bootstrap-e2e-minion-group-jq3j Jan 28 21:04:39.293: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-jq3j(35.247.4.220:22) Jan 28 21:04:39.316: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.83529ms Jan 28 21:04:39.316: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:39.316: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 47.279968ms Jan 28 21:04:39.316: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:39.326: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.248504ms Jan 28 21:04:39.326: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:39.326: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r": Phase="Running", Reason="", readiness=true. Elapsed: 56.145454ms Jan 28 21:04:39.326: INFO: Pod "metadata-proxy-v0.1-2vpw5": Phase="Running", Reason="", readiness=true. Elapsed: 56.314802ms Jan 28 21:04:39.326: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-g05r" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.326: INFO: Pod "metadata-proxy-v0.1-2vpw5" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.326: INFO: Pod "metadata-proxy-v0.1-2dsmd": Phase="Running", Reason="", readiness=true. Elapsed: 56.235743ms Jan 28 21:04:39.326: INFO: Pod "metadata-proxy-v0.1-2dsmd" satisfied condition "running and ready, or succeeded" Jan 28 21:04:39.326: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-g05r metadata-proxy-v0.1-2dsmd] Jan 28 21:04:39.326: INFO: Getting external IP address for bootstrap-e2e-minion-group-g05r Jan 28 21:04:39.326: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-g05r(34.168.227.18:22) Jan 28 21:04:39.817: INFO: ssh prow@35.247.4.220:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 21:04:39.817: INFO: ssh prow@35.247.4.220:22: stdout: "" Jan 28 21:04:39.817: INFO: ssh prow@35.247.4.220:22: stderr: "" Jan 28 21:04:39.817: INFO: ssh prow@35.247.4.220:22: exit code: 0 Jan 28 21:04:39.817: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-jq3j condition Ready to be false Jan 28 21:04:39.845: INFO: ssh prow@34.168.227.18:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 21:04:39.845: INFO: ssh prow@34.168.227.18:22: stdout: "" Jan 28 21:04:39.845: INFO: ssh prow@34.168.227.18:22: stderr: "" Jan 28 21:04:39.845: INFO: ssh prow@34.168.227.18:22: exit code: 0 Jan 28 21:04:39.845: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-g05r condition Ready to be false Jan 28 21:04:39.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:39.887: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:41.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.089445547s Jan 28 21:04:41.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:41.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2.090141645s Jan 28 21:04:41.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:41.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 2.098976603s Jan 28 21:04:41.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:41.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:41.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:43.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.091467339s Jan 28 21:04:43.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:43.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.092214783s Jan 28 21:04:43.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:43.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 4.099794909s Jan 28 21:04:43.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:43.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:43.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:45.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.090473329s Jan 28 21:04:45.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:45.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.091214639s Jan 28 21:04:45.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:45.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 6.098745952s Jan 28 21:04:45.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:45.992: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:46.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:47.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.093046738s Jan 28 21:04:47.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:47.363: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.09387138s Jan 28 21:04:47.363: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:47.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 8.099295564s Jan 28 21:04:47.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:48.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:48.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:49.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.090431674s Jan 28 21:04:49.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:49.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.091437385s Jan 28 21:04:49.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:49.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 10.098833473s Jan 28 21:04:49.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:50.084: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:50.106: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:51.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.091098046s Jan 28 21:04:51.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:51.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.091996832s Jan 28 21:04:51.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:51.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 12.098950705s Jan 28 21:04:51.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:52.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:52.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:53.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 14.08986461s Jan 28 21:04:53.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.089459024s Jan 28 21:04:53.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:53.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:53.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 14.098782589s Jan 28 21:04:53.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:54.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:54.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:55.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 16.089798025s Jan 28 21:04:55.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:55.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.090818982s Jan 28 21:04:55.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:55.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 16.099540563s Jan 28 21:04:55.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:56.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:56.238: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:57.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 18.090276537s Jan 28 21:04:57.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:57.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.090009448s Jan 28 21:04:57.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:57.372: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 18.103146314s Jan 28 21:04:57.372: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:04:58.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:58.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:04:59.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.089459195s Jan 28 21:04:59.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.089883633s Jan 28 21:04:59.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:59.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:04:59.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 20.099360903s Jan 28 21:04:59.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:00.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:00.323: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:01.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 22.090450186s Jan 28 21:05:01.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.090066616s Jan 28 21:05:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:01.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 22.099367774s Jan 28 21:05:01.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:02.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:02.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:03.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 24.089887562s Jan 28 21:05:03.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:03.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.090739035s Jan 28 21:05:03.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:03.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 24.099416185s Jan 28 21:05:03.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:04.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:04.409: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:05.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 26.090266602s Jan 28 21:05:05.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:05.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.091141323s Jan 28 21:05:05.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:05.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 26.098503319s Jan 28 21:05:05.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:06.430: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:06.451: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:07.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.092282831s Jan 28 21:05:07.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:07.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.092792387s Jan 28 21:05:07.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:07.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 28.100202904s Jan 28 21:05:07.370: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:08.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:08.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:09.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 30.09062821s Jan 28 21:05:09.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:48 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:09.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.091441154s Jan 28 21:05:09.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:09.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 30.099420216s Jan 28 21:05:09.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:10.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:10.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:11.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 32.090333063s Jan 28 21:05:11.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:11.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.091288471s Jan 28 21:05:11.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:11.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 32.098896117s Jan 28 21:05:11.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:12.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:12.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:13.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.089362161s Jan 28 21:05:13.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:13.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.089881214s Jan 28 21:05:13.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:13.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 34.099427127s Jan 28 21:05:13.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:14.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:14.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:15.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 36.089877764s Jan 28 21:05:15.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:15.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.090569192s Jan 28 21:05:15.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:15.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 36.099722934s Jan 28 21:05:15.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:16.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:16.665: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:17.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.090773547s Jan 28 21:05:17.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:17.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.091317626s Jan 28 21:05:17.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:17.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 38.099748863s Jan 28 21:05:17.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:18.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:18.708: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:19.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 40.090309825s Jan 28 21:05:19.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:19.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.091386375s Jan 28 21:05:19.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:19.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 40.098479206s Jan 28 21:05:19.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:20.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:20.753: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:21.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 42.090453708s Jan 28 21:05:21.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:21.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.091367565s Jan 28 21:05:21.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:21.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 42.098967666s Jan 28 21:05:21.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:22.770: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:22.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:23.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 44.09037615s Jan 28 21:05:23.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:23.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.091032037s Jan 28 21:05:23.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:23.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 44.099394502s Jan 28 21:05:23.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:24.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:24.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:25.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 46.090422349s Jan 28 21:05:25.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:25.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.091552937s Jan 28 21:05:25.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:25.369: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 46.099376833s Jan 28 21:05:25.369: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:26.857: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:26.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:27.361: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 48.09189296s Jan 28 21:05:27.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:27.362: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.092955474s Jan 28 21:05:27.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:27.372: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 48.103150852s Jan 28 21:05:27.372: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:28.899: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-jq3j condition Ready to be true Jan 28 21:05:28.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:28.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:29.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 50.090023319s Jan 28 21:05:29.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:29.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.091076379s Jan 28 21:05:29.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:29.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 50.099005877s Jan 28 21:05:29.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:30.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:30.986: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:31.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 52.091417871s Jan 28 21:05:31.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:31.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.092274322s Jan 28 21:05:31.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:31.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 52.098993504s Jan 28 21:05:31.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:33.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:33.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:33.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 54.089963629s Jan 28 21:05:33.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:33.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.090880425s Jan 28 21:05:33.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:33.367: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 54.098191553s Jan 28 21:05:33.367: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:35.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:35.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:35.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 56.089821499s Jan 28 21:05:35.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:35.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.091011264s Jan 28 21:05:35.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:35.368: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=false. Elapsed: 56.098693614s Jan 28 21:05:35.368: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-bs1f' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:04:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:08 +0000 UTC }] Jan 28 21:05:37.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:37.147: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:37.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 58.091150661s Jan 28 21:05:37.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:37.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.092112717s Jan 28 21:05:37.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:37.370: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f": Phase="Running", Reason="", readiness=true. Elapsed: 58.100271929s Jan 28 21:05:37.370: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-bs1f" satisfied condition "running and ready, or succeeded" Jan 28 21:05:39.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:39.189: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:39.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.089891342s Jan 28 21:05:39.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:39.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.09040255s Jan 28 21:05:39.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:41.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:41.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:41.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.090192749s Jan 28 21:05:41.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:41.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.091224351s Jan 28 21:05:41.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:43.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:43.278: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:43.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.090552905s Jan 28 21:05:43.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:43.361: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.09175854s Jan 28 21:05:43.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:45.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:45.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:45.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.090294952s Jan 28 21:05:45.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.089890119s Jan 28 21:05:45.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:45.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:47.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:47.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.089685023s Jan 28 21:05:47.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:47.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.090563036s Jan 28 21:05:47.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:47.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:49.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.08964607s Jan 28 21:05:49.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:49.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.090042602s Jan 28 21:05:49.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:49.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:05:49.407: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:51.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.089613599s Jan 28 21:05:51.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:51.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.090404168s Jan 28 21:05:51.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:51.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:51.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:53.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.090066034s Jan 28 21:05:53.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:53.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.090899929s Jan 28 21:05:53.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:53.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:53.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:55.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.089696714s Jan 28 21:05:55.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:55.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.09051931s Jan 28 21:05:55.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:55.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:55.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:57.390: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m18.120881101s Jan 28 21:05:57.390: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:57.391: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.12170562s Jan 28 21:05:57.391: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:57.558: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:57.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:05:59.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.090123345s Jan 28 21:05:59.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:59.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.090581363s Jan 28 21:05:59.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:05:59.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:05:59.621: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:01.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.090445226s Jan 28 21:06:01.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.090034344s Jan 28 21:06:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:01.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:01.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:03.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.090421984s Jan 28 21:06:03.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:03.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.091297831s Jan 28 21:06:03.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:03.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:03.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:05.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.089658901s Jan 28 21:06:05.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:05.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m26.090124621s Jan 28 21:06:05.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:05.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:05.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:07.358: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.089213046s Jan 28 21:06:07.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.08962737s Jan 28 21:06:07.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:07.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:07.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:07.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:09.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m30.09034538s Jan 28 21:06:09.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:09.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.090904084s Jan 28 21:06:09.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:09.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:09.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:11.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.090011953s Jan 28 21:06:11.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:11.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.091223755s Jan 28 21:06:11.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:11.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:11.881: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:13.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.089602048s Jan 28 21:06:13.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:13.360: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.090654129s Jan 28 21:06:13.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:03:29 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:13.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:13.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:15.359: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m36.089417793s Jan 28 21:06:15.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m36.089823333s Jan 28 21:06:15.359: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 21:06:15.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:15.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:15.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:17.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.089964177s Jan 28 21:06:17.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:18.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:18.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:19.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.090054693s Jan 28 21:06:19.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:20.068: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:20.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
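Note for triage: the entries above and below repeat the test framework's two polling loops, one re-evaluating the "running and ready, or succeeded" condition for the pods on bootstrap-e2e-minion-group-bs1f roughly every 2s (up to 5m), and one re-checking the Ready condition of the rebooted nodes. The sketch below is a minimal, illustrative client-go equivalent of that kind of poll; it is not the e2e framework's own helper. The pod and node names, the 2s interval, and the 5m timeout are assumptions copied from this log, and the "or succeeded" branch of the framework's pod check is omitted for brevity.

// Illustrative sketch only: polls a kube-system pod's Ready condition and a
// node's Ready condition, mirroring the cadence seen in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; adjust to the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 5m, matching the elapsed times in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-dns-autoscaler-5f6455f985-8gc49", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient API errors
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"bootstrap-e2e-minion-group-bs1f", metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		fmt.Printf("pod ready=%v, node ready=%v\n", podReady(pod), nodeReady(node))
		return podReady(pod) && nodeReady(node), nil
	})
	if err != nil {
		fmt.Println("condition not met within timeout:", err)
	}
}

In the test run itself, it is exactly this kind of poll expiring that surfaces later in the log, for example when node bootstrap-e2e-minion-group-g05r is reported below as not reaching the desired Ready condition status within 2m0s.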
Jan 28 21:06:21.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.089550071s Jan 28 21:06:21.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:22.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:22.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:23.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.089538394s Jan 28 21:06:23.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:24.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:24.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:25.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.088923604s Jan 28 21:06:25.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:26.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:26.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 21:06:27.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.093350406s Jan 28 21:06:27.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:28.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:28.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:29.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.089929354s Jan 28 21:06:29.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:30.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:30.289: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:31.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.089614038s Jan 28 21:06:31.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:32.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:32.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:33.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.0890961s Jan 28 21:06:33.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:34.378: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:34.378: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:35.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.088988114s Jan 28 21:06:35.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:36.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:36.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:37.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.090838592s Jan 28 21:06:37.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:38.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-g05r is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 21:06:38.466: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:39.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.089243372s Jan 28 21:06:39.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:40.466: INFO: Node bootstrap-e2e-minion-group-g05r didn't reach desired Ready condition status (false) within 2m0s Jan 28 21:06:40.508: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:41.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.089087218s Jan 28 21:06:41.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:42.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:43.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.089280226s Jan 28 21:06:43.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:44.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:45.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m6.088907272s Jan 28 21:06:45.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:46.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:47.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.091282924s Jan 28 21:06:47.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:48.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:49.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.089377694s Jan 28 21:06:49.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:50.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:51.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.089931828s Jan 28 21:06:51.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:52.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:53.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.089912804s Jan 28 21:06:53.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:54.811: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:55.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.088541063s Jan 28 21:06:55.357: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:56.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:57.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m18.093365649s Jan 28 21:06:57.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:06:58.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:06:59.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.088652278s Jan 28 21:06:59.357: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:00.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:01.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.088718895s Jan 28 21:07:01.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:02.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:03.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m24.089664516s Jan 28 21:07:03.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:05.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:05.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.089482809s Jan 28 21:07:05.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:07.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:07.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.091273248s Jan 28 21:07:07.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:09.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:09.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m30.089181225s Jan 28 21:07:09.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:11.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:11.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.089575021s Jan 28 21:07:11.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:13.207: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:13.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.089615639s Jan 28 21:07:13.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:15.250: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:15.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m36.089067176s Jan 28 21:07:15.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:17.295: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:17.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.091029085s Jan 28 21:07:17.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:19.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:19.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.088631014s Jan 28 21:07:19.357: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:21.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m42.090232179s Jan 28 21:07:21.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:21.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:23.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.089095586s Jan 28 21:07:23.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:23.423: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:25.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.089083414s Jan 28 21:07:25.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:25.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:27.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m48.091072048s Jan 28 21:07:27.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:27.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:29.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.089925684s Jan 28 21:07:29.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:29.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:31.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m52.089404646s Jan 28 21:07:31.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:31.596: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:33.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m54.088807072s Jan 28 21:07:33.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:33.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:35.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.089258529s Jan 28 21:07:35.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:35.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:37.361: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.092154924s Jan 28 21:07:37.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:37.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:39.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m0.089725293s Jan 28 21:07:39.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:39.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:41.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.088894447s Jan 28 21:07:41.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:41.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:43.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.089021804s Jan 28 21:07:43.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:43.855: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:45.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m6.088751637s Jan 28 21:07:45.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:45.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:47.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m8.089778001s Jan 28 21:07:47.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:47.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:49.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m10.089586503s Jan 28 21:07:49.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:50.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:51.357: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m12.088695125s Jan 28 21:07:51.357: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:52.077: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:53.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.08929565s Jan 28 21:07:53.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:54.120: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:55.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.089279155s Jan 28 21:07:55.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:56.163: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:57.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m18.090738864s Jan 28 21:07:57.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:07:58.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:07:59.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.089507207s Jan 28 21:07:59.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:05:11 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:00.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:01.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.089735021s Jan 28 21:08:01.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:02.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:03.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
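Editorial note on the records above: the Ready and ContainersReady lastTransitionTime for kube-dns-autoscaler-5f6455f985-8gc49 moves from 21:05:11 to 21:08:00 while the pod stays Running with the autoscaler container unready, which usually means the kubelet re-evaluated or restarted that container. A hedged way to dig into this from outside the test is to read the pod's conditions and container statuses directly with client-go. The sketch below is illustrative only: the kubeconfig path, namespace, and pod name are copied from this log, and nothing here is the e2e framework's own code.

    // Illustrative only: inspect why a container such as "autoscaler" keeps
    // reporting ContainersNotReady. Assumes the same kubeconfig the run uses.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "kube-dns-autoscaler-5f6455f985-8gc49", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Pod-level conditions: this is the list the test prints when it
        // reports "didn't have condition {Ready True}".
        for _, c := range pod.Status.Conditions {
            fmt.Printf("%-16s %-6s reason=%s lastTransition=%s\n",
                c.Type, c.Status, c.Reason, c.LastTransitionTime)
        }
        // Per-container detail: restart count and last termination state
        // usually explain a stuck ContainersNotReady better than the pod
        // conditions alone.
        for _, ctr := range pod.Status.ContainerStatuses {
            fmt.Printf("container=%s ready=%v restarts=%d state=%+v lastState=%+v\n",
                ctr.Name, ctr.Ready, ctr.RestartCount, ctr.State, ctr.LastTerminationState)
        }
    }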
Elapsed: 3m24.088883218s Jan 28 21:08:03.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:04.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:05.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.088897325s Jan 28 21:08:05.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:06.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:07.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.089663138s Jan 28 21:08:07.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:08.421: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:09.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m30.089378573s Jan 28 21:08:09.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:10.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:11.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m32.089021658s Jan 28 21:08:11.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:12.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:13.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.08914038s Jan 28 21:08:13.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:14.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:15.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m36.08885245s Jan 28 21:08:15.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:16.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:17.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.090598152s Jan 28 21:08:17.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:18.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:19.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m40.089346438s Jan 28 21:08:19.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:20.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:21.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m42.090291378s Jan 28 21:08:21.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:22.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:23.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m44.089137322s Jan 28 21:08:23.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:24.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:25.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.090046494s Jan 28 21:08:25.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:26.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:27.361: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m48.091936679s Jan 28 21:08:27.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:28.871: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:29.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.089312263s Jan 28 21:08:29.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:30.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:31.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m52.089603274s Jan 28 21:08:31.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:32.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:33.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m54.088974691s Jan 28 21:08:33.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:34.996: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:35.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m56.089916523s Jan 28 21:08:35.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:37.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:37.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.0908882s Jan 28 21:08:37.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:39.081: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:39.362: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m0.093615757s Jan 28 21:08:39.362: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:41.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:41.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.089850305s Jan 28 21:08:41.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:43.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:43.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m4.089081654s Jan 28 21:08:43.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:45.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:45.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m6.089567362s Jan 28 21:08:45.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:47.253: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:47.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m8.091707472s Jan 28 21:08:47.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:49.295: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 21:08:49.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.089701932s Jan 28 21:08:49.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:51.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 21:08:49 +0000 UTC}]. Failure Jan 28 21:08:51.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
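Editorial note: at 21:08:51 the Ready condition of bootstrap-e2e-minion-group-jq3j is back to true, yet the check still reports Failure because the NodeController's node.kubernetes.io/unreachable NoExecute taint from 21:08:49 is still on the node. A minimal sketch of that "Ready and untainted" recovery check is below. It assumes standard client-go types; the nodeRecovered name is invented for illustration, and node.kubernetes.io/not-ready is included as the sibling taint the controller can also apply, even though only the unreachable taint appears in this log.

    // Sketch (not the framework's actual helper): a rebooted node only counts
    // as recovered once Ready is True AND the node lifecycle controller has
    // removed the NoExecute taints it added while the kubelet was silent.
    package reboottriage

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // node.kubernetes.io/unreachable is the taint seen in the log above;
    // node.kubernetes.io/not-ready is the other taint the controller can add.
    var blockingTaints = map[string]bool{
        "node.kubernetes.io/unreachable": true,
        "node.kubernetes.io/not-ready":   true,
    }

    func nodeRecovered(node *corev1.Node) bool {
        ready := false
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        if !ready {
            return false
        }
        for _, t := range node.Spec.Taints {
            if blockingTaints[t.Key] && t.Effect == corev1.TaintEffectNoExecute {
                // e.g. {node.kubernetes.io/unreachable NoExecute ...} at 21:08:51
                return false
            }
        }
        return true
    }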
Elapsed: 4m12.089464632s Jan 28 21:08:51.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:53.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.088890768s Jan 28 21:08:53.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:53.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-jq3j is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 21:08:49 +0000 UTC}]. Failure Jan 28 21:08:55.361: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m16.092214151s Jan 28 21:08:55.361: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:55.424: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:08:55.424: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-x44dw" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:08:55.424: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 21:08:55.468: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j": Phase="Running", Reason="", readiness=true. Elapsed: 43.593989ms Jan 28 21:08:55.468: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-jq3j" satisfied condition "running and ready, or succeeded" Jan 28 21:08:55.468: INFO: Pod "metadata-proxy-v0.1-x44dw": Phase="Running", Reason="", readiness=true. 
Elapsed: 43.681522ms Jan 28 21:08:55.468: INFO: Pod "metadata-proxy-v0.1-x44dw" satisfied condition "running and ready, or succeeded" Jan 28 21:08:55.468: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-jq3j metadata-proxy-v0.1-x44dw] Jan 28 21:08:55.468: INFO: Reboot successful on node bootstrap-e2e-minion-group-jq3j Jan 28 21:08:57.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.090807235s Jan 28 21:08:57.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:08:59.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m20.089573435s Jan 28 21:08:59.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:01.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.089245703s Jan 28 21:09:01.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:03.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
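Editorial note: the "Waiting up to 5m0s for pod ... to be running and ready, or succeeded" messages that drive the 2-second cadence in this log can be pictured as the following client-go poll. This is a sketch under stated assumptions, not the e2e framework's implementation: the helper name waitRunningReadyOrSucceeded is invented here, and the 2s interval and 5m timeout are taken from the timestamps and the stated limit in this log.

    // Sketch of a "running and ready, or succeeded" wait with plain client-go.
    package reboottriage

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitRunningReadyOrSucceeded(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            if pod.Status.Phase == corev1.PodSucceeded {
                return true, nil
            }
            if pod.Status.Phase != corev1.PodRunning {
                return false, nil
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            // Mirrors the log's "didn't have condition {Ready True}" messages.
            fmt.Printf("pod %s/%s not ready yet: %+v\n", ns, name, pod.Status.Conditions)
            return false, nil
        })
    }

Whether a transient Get error should abort the poll (as above) or be tolerated for a while is a design choice; the log alone does not show which the framework does, so the sketch takes the simpler option.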
Elapsed: 4m24.089028307s Jan 28 21:09:03.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:05.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.089238258s Jan 28 21:09:05.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:07.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m28.089378948s Jan 28 21:09:07.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:09.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.089598773s Jan 28 21:09:09.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:11.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m32.089365797s Jan 28 21:09:11.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:13.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.088980637s Jan 28 21:09:13.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:15.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m36.08923902s Jan 28 21:09:15.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:17.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.089331198s Jan 28 21:09:17.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:19.377: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m40.108714428s Jan 28 21:09:19.378: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:21.359: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.090205158s Jan 28 21:09:21.359: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:23.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m44.089401907s Jan 28 21:09:23.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:25.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.089097875s Jan 28 21:09:25.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:27.360: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m48.090902855s Jan 28 21:09:27.360: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:29.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.089355277s Jan 28 21:09:29.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:31.364: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m52.094922311s Jan 28 21:09:31.364: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:33.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.088835062s Jan 28 21:09:33.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC }] Jan 28 21:09:35.358: INFO: Pod "kube-dns-autoscaler-5f6455f985-8gc49": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m56.08961707s Jan 28 21:09:35.358: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-8gc49' on 'bootstrap-e2e-minion-group-bs1f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 20:52:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 21:08:00 +0000 UTC ContainersNotReady containe